Blaming Humans and Machines: What Shapes People's Reactions to Algorithmic Harm

Abstract

Artificial intelligence (AI) systems can cause harm to people. This research examines how individuals react to such harm through the lens of blame. Building upon research suggesting that people blame AI systems, we investigated how several factors influence people's reactive attitudes towards machines, designers, and users. The results of three studies (N = 1,153) indicate differences in how blame is attributed to these actors. Whether AI systems were explainable did not impact blame directed at them, their developers, and their users. Considerations about fairness and harmfulness increased blame towards designers and users but had little to no effect on judgments of AI systems. Instead, what determined people's reactive attitudes towards machines was whether people thought blaming them would be a suitable response to algorithmic harm. We discuss implications, such as how future decisions about including AI systems in the social and moral spheres will shape laypeople's reactions to AI-caused harm.

Authors
Gabriel Lima
KAIST, Daejeon, Republic of Korea
Nina Grgić-Hlača
Max Planck Institute for Software Systems, Saarbrücken, Germany
Meeyoung Cha
Institute for Basic Science (IBS), Daejeon, Republic of Korea
Paper URL

https://doi.org/10.1145/3544548.3580953

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Humans and Machines

Hall G1
6 presentations
2023-04-25 20:10:00 – 2023-04-25 21:35:00