Towards Relatable Explainable AI with the Perceptual Process

Abstract

Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret, since they lack semantic meaning. We argue that explanations must be more relatable to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing framework and RexNet model for relatable explainable AI with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition, and implemented a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful and further enhanced with semantic cues, but not saliency explanations. This work provides insights into providing and evaluating relatable contrastive explainable AI for perception applications.

Award
Best Paper
Authors
Wencan Zhang
School of Computing, National University of Singapore, Singapore, Singapore
Brian Y. Lim
National University of Singapore, Singapore, Singapore
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501826

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Mistakes, Explainability

383-385
5 presentations
2022-05-03 18:00:00 – 19:15:00