Vinesh Ravuri, Projna Paromita, Karel Mundnich, Amrutha Nadarajan, Brandon M. Booth, Shrikanth S. Narayanan, and Theodora Chaspari. Investigating Group-Specific Models of Hospital Workers’ Well-Being: Implications for Algorithmic Bias. International Journal of Semantic Computing, 14(04):477–499, 2020.

Abstract

Hospital workers often experience burnout due to demanding job responsibilities and long work hours. Data yielded from ambulatory monitoring, combined with machine learning algorithms, can afford us a better understanding of the naturalistic processes that contribute to this burnout. Motivated by the challenges related to accurately tracking well-being in real life, prior work has investigated group-specific machine learning (GS-ML) models that are tailored to groups of participants. We examine a novel GS-ML approach for estimating well-being from real-life multimodal measures collected in situ from hospital workers. In contrast to the majority of prior work that uses pre-determined clustering criteria, we propose an iterative procedure that refines participant clusters based on the representations learned by the GS-ML models. Motivated by prior work that highlights the differential impact of job demands on well-being, we further explore the participant clusters in terms of demographic and job-related attributes. Results indicate that the GS-ML models mostly outperform general models in estimating well-being constructs. The GS-ML models further depict different degrees of predictive power for each participant cluster, as distinguished by age, education, occupational role, and number of supervisees. The observed discrepancies with respect to the GS-ML model decisions are discussed in association with algorithmic bias.
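The iterative cluster-refinement idea described above can be illustrated with a minimal sketch: alternate between fitting a simple per-cluster predictor on its members' pooled data and reassigning each participant to the cluster whose model fits them best. All names, the mean-score "model", and the toy data are illustrative assumptions, not the authors' implementation.

```python
def fit_cluster_model(scores):
    # Toy per-cluster "model": predict the cluster's mean well-being score.
    return sum(scores) / len(scores)

def model_error(model, scores):
    # Mean squared error of the constant predictor on one participant's data.
    return sum((y - model) ** 2 for y in scores) / len(scores)

def refine_clusters(participants, k=2, iters=10):
    """participants: {pid: [well-being scores]}.
    Returns ({pid: cluster index}, {cluster index: fitted model})."""
    pids = list(participants)
    assign = {pid: i % k for i, pid in enumerate(pids)}  # round-robin init
    for _ in range(iters):
        # (1) fit one model per cluster on its members' pooled observations
        models = {}
        for c in range(k):
            pooled = [y for pid in pids if assign[pid] == c
                      for y in participants[pid]]
            if pooled:
                models[c] = fit_cluster_model(pooled)
        # (2) reassign each participant to the best-fitting cluster model
        new_assign = {
            pid: min(models, key=lambda c: model_error(models[c],
                                                       participants[pid]))
            for pid in pids
        }
        if new_assign == assign:  # converged: assignments stopped changing
            break
        assign = new_assign
    return assign, models
```

With participants whose scores form two well-separated levels, the procedure recovers the two groups regardless of the arbitrary initial assignment; a real GS-ML pipeline would cluster on learned model representations rather than raw scores.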

BibTeX Entry

@article{Ravuri-IJSC2020,
author = {Ravuri, Vinesh and Paromita, Projna and Mundnich, Karel and Nadarajan, Amrutha and Booth, Brandon M. and Narayanan, Shrikanth S. and Chaspari, Theodora},
title = {Investigating Group-Specific Models of Hospital Workers’ Well-Being: Implications for Algorithmic Bias},
journal = {International Journal of Semantic Computing},
volume = {14},
number = {04},
pages = {477--499},
year = {2020},
doi = {10.1142/S1793351X20500075},
url = {https://doi.org/10.1142/S1793351X20500075},
abstract = {Hospital workers often experience burnout due to demanding job responsibilities and long work hours. Data yielded from ambulatory monitoring, combined with machine learning algorithms, can afford us a better understanding of the naturalistic processes that contribute to this burnout. Motivated by the challenges related to accurately tracking well-being in real life, prior work has investigated group-specific machine learning (GS-ML) models that are tailored to groups of participants. We examine a novel GS-ML approach for estimating well-being from real-life multimodal measures collected in situ from hospital workers. In contrast to the majority of prior work that uses pre-determined clustering criteria, we propose an iterative procedure that refines participant clusters based on the representations learned by the GS-ML models. Motivated by prior work that highlights the differential impact of job demands on well-being, we further explore the participant clusters in terms of demographic and job-related attributes. Results indicate that the GS-ML models mostly outperform general models in estimating well-being constructs. The GS-ML models further depict different degrees of predictive power for each participant cluster, as distinguished by age, education, occupational role, and number of supervisees. The observed discrepancies with respect to the GS-ML model decisions are discussed in association with algorithmic bias.}
}

Generated by bib2html.pl (written by Patrick Riley) on Fri Oct 01, 2021 10:50:38