User Characteristics in Explainable AI: The Rabbit Hole of Personalization?

Abstract

As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and question the pursuit of personalized XAI based on fine-grained user characteristics.

Authors
Robert Nimmo
University of Glasgow, Glasgow, United Kingdom
Marios Constantinides
Nokia Bell Labs, Cambridge, United Kingdom
Ke Zhou
Nokia Bell Labs, Cambridge, United Kingdom
Daniele Quercia
Nokia Bell Labs, Cambridge, United Kingdom
Simone Stumpf
University of Glasgow, Glasgow, United Kingdom
Paper URL

https://doi.org/10.1145/3613904.3642352

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Explainable AI

Room: 313B
5 presentations
2024-05-16 20:00:00 – 2024-05-16 21:20:00