Enterprises have recently adopted AI in human resource management (HRM) to evaluate employees’ work performance. However, in an HRM context where multiple stakeholders with different incentives are complexly intertwined, it is problematic to design AI that reflects only one stakeholder group’s needs (e.g., enterprises, HR managers). Our research aims to investigate what tensions surrounding AI in HRM exist among stakeholders and to explore design solutions that balance these tensions. By conducting stakeholder-centered participatory workshops with diverse stakeholders (including employees, employers/HR teams, and AI/business experts), we identified five major tensions: 1) divergent perspectives on fairness, 2) the accuracy of AI, 3) the transparency of the algorithm and its decision process, 4) the interpretability of algorithmic decisions, and 5) the trade-off between productivity and inhumanity. We present stakeholder-centered design ideas for solutions to mitigate these tensions and further discuss how to promote harmony among various stakeholders in the workplace.
https://dl.acm.org/doi/abs/10.1145/3491102.3517672
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)