Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making

Abstract

How to attribute responsibility for the actions of autonomous artificial intelligence (AI) systems has been widely debated across the humanities and social sciences. This work presents two experiments (N=200 each) that measure people's perceptions of eight different notions of moral responsibility concerning AI and human agents in the context of bail decision-making. Using vignettes adapted from real-life cases, our experiments show that AI agents are held causally responsible and blamed similarly to human agents for an identical task. However, there was a meaningful difference in how people perceived these agents' moral responsibility: present-looking and forward-looking notions of responsibility were ascribed to human agents to a higher degree than to AI agents. We also found that people expect both AI and human decision-makers and advisors to justify their decisions regardless of their nature. We discuss the policy and HCI implications of these findings, such as the need for explainable AI in high-stakes scenarios.

Authors
Gabriel Lima
KAIST, Daejeon, Korea, Republic of
Nina Grgić-Hlača
Max Planck Institute for Software Systems, Saarbrücken, Germany
Meeyoung Cha
Institute for Basic Science (IBS), Daejeon, Korea, Republic of
DOI

10.1145/3411764.3445260

Paper URL

https://doi.org/10.1145/3411764.3445260

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Human, ML & AI

[A] Paper Room 14, 2021-05-10 17:00–19:00
[B] Paper Room 14, 2021-05-11 01:00–03:00
[C] Paper Room 14, 2021-05-11 09:00–11:00