"I think this is fair": Uncovering the Complexities of Stakeholder Decision-Making in AI Fairness Assessment

Abstract

Assessing fairness in artificial intelligence (AI) typically involves AI experts who select protected features and fairness metrics, and who set fairness thresholds, to assess outcome fairness. However, little is known about how stakeholders, particularly those affected by AI outcomes but lacking AI expertise, assess fairness. To address this gap, we conducted a qualitative study with 26 stakeholders without AI expertise, representing potential decision subjects in a credit rating scenario, to examine how they assess fairness when placed in the role of deciding which features to prioritize, which metrics to use, and which thresholds to set. We reveal that stakeholders' fairness decisions are more complex than typical AI expert practices: they considered features far beyond legally protected ones, tailored metrics to specific contexts, set diverse yet stricter fairness thresholds, and even preferred to design customized fairness metrics. Our results extend the understanding of how stakeholders can meaningfully contribute to AI fairness governance and mitigation, underscoring the importance of incorporating stakeholders' nuanced fairness judgments.

Authors
Lin Luo
University of Glasgow, Glasgow, United Kingdom
Yuri Nakao
Fujitsu Limited, Kawasaki, Kanagawa, Japan
Mathieu Chollet
University of Glasgow, Glasgow, United Kingdom
Hiroya Inakoshi
Fujitsu Limited, Kawasaki, Japan
Simone Stumpf
University of Glasgow, Glasgow, United Kingdom

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Explaining and Evaluating AI Systems

Area 1 + 2 + 3: theatre
7 presentations
2026-04-16, 20:15–21:45