"Help Me Help the AI": Understanding How Explainability Can Support Human-AI Interaction

Abstract

Despite the proliferation of explainable AI (XAI) methods, little is understood about end-users' explainability needs and behaviors around XAI explanations. To address this gap and contribute to understanding how explainability can support human-AI interaction, we conducted a mixed-methods study with 20 end-users of a real-world AI application, the Merlin bird identification app, and inquired about their XAI needs, uses, and perceptions. We found that participants desire practically useful information that can improve their collaboration with the AI, more so than technical system details. Relatedly, participants intended to use XAI explanations for various purposes beyond understanding the AI's outputs: calibrating trust, improving their task skills, changing their behavior to supply better inputs to the AI, and giving constructive feedback to developers. Finally, among existing XAI approaches, participants preferred part-based explanations that resemble human reasoning and explanations. We discuss the implications of our findings and provide recommendations for future XAI design.

Award
Honorable Mention
Authors
Sunnie S. Y. Kim
Princeton University, Princeton, New Jersey, United States
Elizabeth A. Watkins
Intel Labs, Santa Clara, California, United States
Olga Russakovsky
Princeton University, Princeton, New Jersey, United States
Ruth Fong
Princeton University, Princeton, New Jersey, United States
Andrés Monroy-Hernández
Princeton University, Princeton, New Jersey, United States
Paper URL

https://doi.org/10.1145/3544548.3581001

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Explainable, Responsible, Manageable AI

Hall D
6 presentations
2023-04-26, 18:00:00 to 19:30:00