Responsibility Attribution in Human Interactions with Everyday AI Systems

Abstract

How do individuals perceive AI systems as responsible entities in everyday human-AI collaborations? Drawing on psychological literature on attribution theory, praise-blame asymmetries, and negativity bias, this study investigated the effects of perspective (actor vs. observer) and outcome favorability (positive vs. negative) on how participants (N=321) attributed responsibility for outcomes resulting from shared human-AI decision-making. Bayesian modelling and reflexive thematic analysis revealed that, overall, participants were more likely to attribute greater responsibility to the AI system. When the outcome was positive, participants were more likely to ascribe shared responsibility to both the human and the AI, rather than to either separately. When the outcome was negative, participants were more likely to attribute responsibility to a single entity, but not consistently to either the human or the AI. These results extend our understanding of how individuals assign blame and praise in shared interactions involving AI systems.

Award
Honorable Mention
Authors
Joe Brailsford
The University of Melbourne, Melbourne, Australia
Frank Vetere
The University of Melbourne, Melbourne, Australia
Eduardo Velloso
University of Sydney, Sydney, New South Wales, Australia
DOI

10.1145/3706598.3713126

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713126

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Tech and AI Literacy

G416+G417
7 presentations
2025-04-29 18:00–19:30