Incremental XAI: Memorable Understanding of AI with Incremental Explanations

Abstract

Many explainable AI (XAI) techniques strive for interpretability by providing concise salient information, such as sparse linear factors. However, users see either inaccurate global explanations or highly varying local explanations. We propose to provide more detailed explanations by leveraging the human cognitive capacity to accumulate knowledge by incrementally receiving more details. Focusing on linear factor explanations (factors × values = outcome), we introduce Incremental XAI to automatically partition explanations for general and atypical instances by providing Base + Incremental factors to help users read and remember more faithful explanations. Memorability is improved by reusing base factors and reducing the number of factors shown in atypical cases. In modeling, formative, and summative user studies, we evaluated the faithfulness, memorability, and understandability of Incremental XAI against baseline explanation methods. This work contributes towards more usable explanations that users can better ingrain to facilitate intuitive engagement with AI.

Authors
Jessica Y. Bo
University of Toronto, Toronto, Ontario, Canada
Pan Hao
National University of Singapore, Singapore, Singapore
Brian Y. Lim
National University of Singapore, Singapore, Singapore
Paper URL

doi.org/10.1145/3613904.3642689

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Explainable AI

Room: 313B
5 presentations
2024-05-16 20:00–21:20