People May Punish, Not Blame, Robots

Abstract

As robots take a greater part in our moral decision-making processes, whether people hold them accountable for moral harm becomes critical to explore. Blame and punishment signify moral accountability, often involving emotions. We quantitatively examined people's willingness to blame or punish an emotional vs. non-emotional robot that admits to its wrongdoing. Studies 1 and 2 (online video interaction) showed that people may punish a robot because of its perceived lack of emotional capacity rather than its perceived agency. Study 3 (in the lab) demonstrated that people were willing neither to blame nor to punish the robot. Punishing non-emotional robots seems more likely than blaming them, yet punishment towards robots is more likely to arise online than offline. We reflect on whether and why victimized humans (and those who care for them) may seek retributive justice against robot scapegoats when there are no humans to hold accountable for moral harm.

Authors
Minha Lee
Eindhoven University of Technology, Eindhoven, Netherlands
Peter Ruijten
Eindhoven University of Technology, Eindhoven, Netherlands
Lily Frank
Eindhoven University of Technology, Eindhoven, Netherlands
Yvonne de Kort
Eindhoven University of Technology, Eindhoven, Netherlands
Wijnand IJsselsteijn
Eindhoven University of Technology, Eindhoven, Netherlands
DOI

10.1145/3411764.3445284

Paper URL

https://doi.org/10.1145/3411764.3445284

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Smart Home, Bot, Robot, & Drone / Input & Measurement

[A] Paper Room 14, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 14, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 14, 2021-05-13 09:00:00~2021-05-13 11:00:00
Paper Room 14 (11 presentations)