Explainable AI

Conference Name
CHI 2024
User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
Abstract

As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and question the pursuit of personalized XAI based on fine-grained user characteristics.

Authors
Robert Nimmo
University of Glasgow, Glasgow, United Kingdom
Marios Constantinides
Nokia Bell Labs, Cambridge, United Kingdom
Ke Zhou
Nokia Bell Labs, Cambridge, United Kingdom
Daniele Quercia
Nokia Bell Labs, Cambridge, United Kingdom
Simone Stumpf
University of Glasgow, Glasgow, United Kingdom
Paper URL

https://doi.org/10.1145/3613904.3642352

Video
Incremental XAI: Memorable Understanding of AI with Incremental Explanations
Abstract

Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users either see only inaccurate global explanations or highly varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details. Focusing on linear factor explanations (factors × values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability, and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better ingrain to facilitate intuitive engagement with AI.
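To make the Base + Incremental idea concrete, the sketch below illustrates a linear factor explanation (factors × values = outcome) whose base factors are reused for typical instances and extended with incremental factors for atypical ones. It is an illustrative approximation only: the factor names, weights, and atypicality rule are hypothetical placeholders, not the paper's implementation.

# Illustrative sketch of a Base + Incremental linear factor explanation.
# Factor names, weights, and the atypicality rule are hypothetical.

# Base factors: a sparse linear explanation reused for typical instances.
BASE_FACTORS = {"size_sqft": 0.5, "num_rooms": 10.0}

# Incremental factors: extra terms shown only for atypical instances,
# added on top of the reused base factors.
INCREMENTAL_FACTORS = {"waterfront": 50.0}


def is_atypical(instance):
    """Hypothetical rule deciding when the incremental factors apply."""
    return instance.get("waterfront", 0) == 1


def explain(instance):
    """Return (factor, value, contribution) rows so that factors x values = outcome."""
    factors = dict(BASE_FACTORS)
    if is_atypical(instance):
        factors.update(INCREMENTAL_FACTORS)  # base factors are reused, details are added
    rows = [(name, instance[name], weight * instance[name])
            for name, weight in factors.items() if name in instance]
    outcome = sum(contribution for _, _, contribution in rows)
    return rows, outcome


typical = {"size_sqft": 1200, "num_rooms": 3, "waterfront": 0}
atypical = {"size_sqft": 1500, "num_rooms": 4, "waterfront": 1}

for instance in (typical, atypical):
    rows, outcome = explain(instance)
    print(rows, "->", outcome)

In this toy reading, a user remembers the two base factors once and only learns the single incremental factor when an atypical (waterfront) case appears, which is the memorability argument the abstract makes.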

Authors
Jessica Y. Bo
University of Toronto, Toronto, Ontario, Canada
Pan Hao
National University of Singapore, Singapore, Singapore
Brian Y. Lim
National University of Singapore, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3613904.3642689

Video
Why the Fine, AI? The Effect of Explanation Level on Citizens' Fairness Perception of AI-based Discretion in Public Administrations
Abstract

The integration of Artificial Intelligence into decision-making processes within public administration extends to AI systems that exercise administrative discretion. This raises fairness concerns among citizens, possibly leading to abandonment of these AI systems. Uncertainty persists regarding which explanation elements impact citizens' perception of fairness and their level of technology adoption. In a video-vignette online survey (N=847), we investigated the impact of explanation levels on citizens' perceptions of informational fairness, distributive fairness, and system adoption level. We enhanced explanations in three stages: none, factor explanations, and, finally, factor importance explanations. We found that more detailed explanations improved informational and distributive fairness perceptions, but did not affect citizens' willingness to reuse the system. Interestingly, citizens with higher AI literacy expressed greater willingness to adopt the system, regardless of the explanation level. Qualitative findings revealed that greater human involvement and appeal mechanisms could positively influence citizens' perceptions. Our findings highlight the importance of citizen-centered design of AI-based decision-making in public administration.

Authors
Saja Aljuneidi
OFFIS - Institute for Information Technology, Oldenburg, Germany
Wilko Heuten
OFFIS - Institute for Information Technology, Oldenburg, Germany
Larbi Abdenebaoui
OFFIS - Institute for Information Technology, Oldenburg, Germany
Maria K. Wolters
OFFIS - Institute for Information Technology, Oldenburg, Germany
Susanne Boll
University of Oldenburg, Oldenburg, Germany
Paper URL

https://doi.org/10.1145/3613904.3642535

Video
EXMOS: Explanatory Model Steering through Multifaceted Explanations and Data Configurations
Abstract

Explanations in interactive machine-learning systems facilitate debugging and improving prediction models. However, the effectiveness of various global model-centric and data-centric explanations in aiding domain experts to detect and resolve potential data issues for model improvement remains unexplored. This research investigates the influence of data-centric and model-centric global explanations in systems that support healthcare experts in optimising models through automated and manual data configurations. We conducted quantitative (n=70) and qualitative (n=30) studies with healthcare experts to explore the impact of different explanations on trust, understandability and model improvement. Our results reveal the insufficiency of global model-centric explanations for guiding users during data configuration. Although data-centric explanations enhanced understanding of post-configuration system changes, a hybrid fusion of both explanation types demonstrated the highest effectiveness. Based on our study results, we also present design implications for effective explanation-driven interactive machine-learning systems.
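As a rough illustration of the two explanation types contrasted in this study, the sketch below computes a global model-centric explanation (feature importances of a trained model) alongside a data-centric explanation (per-feature training-data statistics). The dataset, model choice, and feature names are assumptions made for illustration, not the EXMOS system's actual pipeline.

# Minimal sketch contrasting model-centric and data-centric global explanations.
# Dataset, model, and feature names are placeholders, not the EXMOS pipeline.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
data = pd.DataFrame({
    "age": rng.integers(20, 80, 200),
    "bmi": rng.normal(27, 5, 200),
    "blood_pressure": rng.normal(120, 15, 200),
})
labels = (data["bmi"] > 30).astype(int)  # toy target for the sketch

# Model-centric global explanation: which features the trained model relies on.
model = RandomForestClassifier(random_state=0).fit(data, labels)
model_centric = pd.Series(model.feature_importances_, index=data.columns)
print("Model-centric (feature importances):")
print(model_centric.sort_values(ascending=False))

# Data-centric global explanation: what the training data itself looks like,
# e.g. per-feature statistics and missing-value counts that hint at data issues.
data_centric = data.describe().T[["mean", "std", "min", "max"]]
data_centric["missing"] = data.isna().sum()
print("Data-centric (training-data statistics):")
print(data_centric)

The contrast is the point: the first view explains the fitted model's behaviour, while the second surfaces properties of the data that a domain expert might want to reconfigure before retraining.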

Authors
Aditya Bhattacharya
KU Leuven, Leuven, Vlaams-Brabant, Belgium
Simone Stumpf
University of Glasgow, Glasgow, United Kingdom
Lucija Gosak
University of Maribor, Faculty of Health Sciences, Maribor, Slovenia
Gregor Stiglic
University of Maribor, Maribor, Slovenia
Katrien Verbert
KU Leuven, Leuven, Belgium
Paper URL

https://doi.org/10.1145/3613904.3642106

Video
The Who in XAI: How AI Background Shapes Perceptions of AI Explanations
Abstract

Explainability of AI systems is critical for users to take informed actions. Understanding who opens the black-box of AI is just as important as opening it. We conduct a mixed-methods study of how two different groups—people with and without AI background—perceive different types of AI explanations. Quantitatively, we share user perceptions along five dimensions. Qualitatively, we describe how AI background can influence interpretations, elucidating the differences through lenses of appropriation and cognitive heuristics. We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design. Carrying critical implications for the field of XAI, our findings showcase how AI-generated explanations can have negative consequences despite best intentions and how that could lead to harmful manipulation of trust. We propose design interventions to mitigate them.

Authors
Upol Ehsan
Georgia Institute of Technology, Atlanta, Georgia, United States
Samir Passi
Microsoft, Redmond, Washington, United States
Q. Vera Liao
Microsoft Research, Montreal, Quebec, Canada
Larry Chan
Illumio, Sunnyvale, California, United States
I-Hsiang Lee
Georgia Institute of Technology, Atlanta, Georgia, United States
Michael Muller
IBM Research, Cambridge, Massachusetts, United States
Mark O. Riedl
Georgia Tech, Atlanta, Georgia, United States
Paper URL

https://doi.org/10.1145/3613904.3642474

Video