Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making

Abstract

While artificial intelligence (AI) is increasingly applied to decision-making, ethical decisions pose particular challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived, and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and the level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be more morally trustworthy but less capable than their AI equivalent. This is reflected in participants’ reliance on AI: AI recommendations and decisions are accepted more often than those of the human expert. However, AI team experts are perceived to be less responsible than humans, while programmers and sellers of AI systems are deemed partially responsible instead.

Award
Honorable Mention
Authors
Suzanne Tolmeijer
University of Zurich, Zurich, Switzerland
Markus Christen
University of Zurich, Zurich, Switzerland
Serhiy Kandul
University of Zurich, Zurich, Switzerland
Markus Kneer
University of Zurich, Zurich, Switzerland
Abraham Bernstein
University of Zurich, Zurich, Switzerland
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517732

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Bias and Ethics

5 presentations
2022-05-03 23:15:00 – 2022-05-04 00:30:00