Fair Machine Guidance to Enhance Fair Decision Making in Biased People

Abstract

Teaching unbiased decision-making is crucial for addressing biased decision-making in daily life. Although both raising awareness of personal biases and providing guidance on unbiased decision-making are essential, the latter topic remains under-researched. In this study, we developed and evaluated an AI system aimed at educating individuals on making unbiased decisions using fairness-aware machine learning. In a between-subjects experimental design, 99 participants who were prone to bias performed personal assessment tasks. They were divided into two groups: a) those who received AI guidance for fair decision-making before the task and b) those who received no such guidance but were informed of their biases. The results suggest that although several participants doubted the fairness of the AI system, fair machine guidance prompted them to reassess their views regarding fairness, reflect on their biases, and modify their decision-making criteria. Our findings provide insights into the design of AI systems for guiding fair decision-making in humans.
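The abstract mentions fairness-aware machine learning without detailing a specific method. As a minimal illustrative sketch (not taken from the paper), the snippet below computes the demographic parity difference, one common group-fairness metric such systems may monitor when guiding decisions; the data and function name are hypothetical.

```python
import numpy as np

def demographic_parity_difference(decisions, group):
    """Absolute gap in positive-decision rates between two groups.

    decisions: binary array of accept/reject decisions (1 = accept)
    group:     binary array encoding a sensitive attribute (0 or 1)
    """
    rate_0 = decisions[group == 0].mean()
    rate_1 = decisions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Hypothetical assessment decisions for two groups of candidates
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

# 0.6 acceptance rate for group 0 vs. 0.2 for group 1 -> 0.4 disparity
print(demographic_parity_difference(decisions, group))
```

A fairness-aware guidance system of the kind the paper describes would surface disparities like this to the decision maker, although the paper's actual implementation may differ.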

Authors
Mingzhe Yang
The University of Tokyo, Tokyo, Japan
Hiromi Arai
RIKEN, Tokyo, Japan
Naomi Yamashita
NTT, Keihanna, Japan
Yukino Baba
The University of Tokyo, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3642627


Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Ethics of AI

314
5 presentations
2024-05-14 01:00:00 – 2024-05-14 02:20:00