Skin-Deep Bias: How Avatar Appearances Shape Perceptions of AI Hiring

Abstract

Artificial intelligence is increasingly used in hiring, raising concerns about how applicants perceive these systems. While prior work on algorithmic fairness has emphasized technical bias mitigation, little is known about how avatar identity cues influence applicants’ justice attributions in an interview context. We conducted a crowdsourcing study with 215 participants who completed an interview with photorealistic AI avatars whose phenotypic traits (race and sex) were varied, followed by a standardized rejection. Using self-reports, sentiment analysis, and eye tracking, we measured perceptions of trust, fairness, and bias. Results show that racial mismatch heightened perceptions of ethnic bias, while partial match (sharing only one identity dimension) reduced fairness judgments compared to both full match and no match. This work extends the Computers-Are-Social-Actors paradigm by demonstrating that avatar appearances shape justice-related evaluations of AI. We contribute to HCI by revealing how identity cues influence fairness attributions and offer actionable insights for designing equitable AI interview systems.

Award
Honorable Mention
Authors
Ka Hei Carrie Lau
Technical University of Munich, Munich, Germany
Philipp Stark
Lund University, Lund, Sweden
Efe Bozkir
Technical University of Munich, Munich, Germany
Enkelejda Kasneci
Technical University of Munich, Munich, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI in Work and Expertise

P1 - Room 124
7 presentations
2026-04-13, 20:15–21:45