Machine learning models need to provide contrastive explanations, since people often seek to understand why a puzzling prediction occurred instead of some expected outcome. Current contrastive explanations are rudimentary comparisons between examples or raw features, which remain difficult to interpret because they lack semantic meaning. We argue that explanations must be more relatable, relating predictions to other concepts, hypotheticals, and associations. Inspired by the perceptual process from cognitive psychology, we propose the XAI Perceptual Processing framework and the RexNet model for relatable explainable AI, with Contrastive Saliency, Counterfactual Synthetic, and Contrastive Cues explanations. We investigated the application of vocal emotion recognition, implementing a modular multi-task deep neural network to predict and explain emotions from speech. From think-aloud and controlled studies, we found that counterfactual explanations were useful, and further enhanced with semantic cues, but saliency explanations were not. This work provides insights into providing and evaluating relatable, contrastive explainable AI for perception applications.
https://dl.acm.org/doi/abs/10.1145/3491102.3501826
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)
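To make the abstract's "modular multi-task deep neural network" and "Contrastive Saliency" ideas concrete, below is a minimal PyTorch sketch. It is not the authors' RexNet implementation: the class name MultiTaskEmotionNet, the feature and class sizes, and the gradient-difference saliency definition are illustrative assumptions; the paper's actual model and explanation methods are described at the DOI above.

```python
# Minimal sketch (assumptions, not the paper's RexNet code): a multi-task
# network that predicts an emotion class from per-frame speech features and
# exposes a simple gradient-based contrastive saliency between the predicted
# class and a contrast class.
import torch
import torch.nn as nn


class MultiTaskEmotionNet(nn.Module):
    def __init__(self, n_features: int = 40, n_classes: int = 6):
        super().__init__()
        # Shared encoder over per-frame acoustic features (e.g., MFCCs).
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
        )
        # Task head: emotion classification over pooled frame embeddings.
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, features); mean-pool over frames before the head.
        return self.classifier(self.encoder(x).mean(dim=1))


def contrastive_saliency(model: nn.Module, x: torch.Tensor,
                         predicted: int, contrast: int) -> torch.Tensor:
    """Gradient of (logit_predicted - logit_contrast) w.r.t. the input:
    highlights features that favor the prediction over the contrast class."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    (logits[0, predicted] - logits[0, contrast]).backward()
    return x.grad.abs()


if __name__ == "__main__":
    model = MultiTaskEmotionNet()
    frames = torch.randn(1, 100, 40)   # 1 utterance, 100 frames, 40 features
    pred = model(frames).argmax(dim=1).item()
    sal = contrastive_saliency(model, frames, predicted=pred,
                               contrast=(pred + 1) % 6)
    print(sal.shape)                   # per-frame, per-feature saliency map
```

In this sketch the "modular" aspect is the shared encoder with a separable task head; further heads (e.g., for generating counterfactual or cue explanations) could attach to the same encoder, which is one plausible reading of the multi-task design the abstract describes.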