The Who in XAI: How AI Background Shapes Perceptions of AI Explanations

Abstract

Explainability of AI systems is critical for users to take informed actions. Understanding who opens the black-box of AI is just as important as opening it. We conduct a mixed-methods study of how two different groups—people with and without AI background—perceive different types of AI explanations. Quantitatively, we share user perceptions along five dimensions. Qualitatively, we describe how AI background can influence interpretations, elucidating the differences through lenses of appropriation and cognitive heuristics. We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design. Carrying critical implications for the field of XAI, our findings showcase how AI-generated explanations can have negative consequences despite best intentions and how that could lead to harmful manipulation of trust. We propose design interventions to mitigate them.

Authors
Upol Ehsan
Georgia Institute of Technology, Atlanta, Georgia, United States
Samir Passi
Microsoft, Redmond, Washington, United States
Q. Vera Liao
Microsoft Research, Montreal, Quebec, Canada
Larry Chan
Illumio, Sunnyvale, California, United States
I-Hsiang Lee
Georgia Institute of Technology, Atlanta, Georgia, United States
Michael Muller
IBM Research, Cambridge, Massachusetts, United States
Mark O. Riedl
Georgia Tech, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3613904.3642474

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Explainable AI

313B
5 presentations
2024-05-16 20:00:00 – 2024-05-16 21:20:00