Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning

Abstract

Model explanations such as saliency maps can improve user trust in AI by highlighting important features for a prediction. However, these become distorted and misleading when explaining predictions of images that are subject to systematic error (bias) by perturbations and corruptions. Furthermore, the distortions persist despite model fine-tuning on images biased by different factors (blur, color temperature, day/night). We present Debiased-CAM to recover explanation faithfulness across various bias types and levels by training a multi-input, multi-task model with auxiliary tasks for explanation and bias level predictions. In simulation studies, the approach not only enhanced prediction accuracy, but also generated highly faithful explanations about these predictions as if the images were unbiased. In user studies, debiased explanations improved user task performance, perceived truthfulness and perceived helpfulness. Debiased training can provide a versatile platform for robust performance and explanation faithfulness for a wide range of applications with data biases.
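The abstract describes training with auxiliary tasks alongside the main prediction task. A minimal sketch of such a weighted multi-task objective is shown below; the loss weights, names, and example values are illustrative assumptions, not the paper's actual formulation.

```python
def multitask_loss(pred_loss, cam_loss, bias_loss,
                   lambda_cam=1.0, lambda_bias=0.5):
    """Combine the primary prediction loss with two auxiliary losses:
    one for explanation (CAM) fidelity and one for bias level prediction.
    The weighting scheme here is a hypothetical illustration."""
    return pred_loss + lambda_cam * cam_loss + lambda_bias * bias_loss

# Example with illustrative values: classification loss, a loss comparing
# the CAM on the biased image against the unbiased-image CAM, and a
# bias-level regression loss.
total = multitask_loss(pred_loss=0.8, cam_loss=0.3, bias_loss=0.2)
print(total)  # 0.8 + 1.0*0.3 + 0.5*0.2 = 1.2
```

In this setup, the auxiliary explanation loss is what pushes the model's saliency maps on perturbed images toward those it would produce on clean images.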

Authors
Wencan Zhang
School of Computing, National University of Singapore, Singapore, Singapore
Mariella Dimiccoli
CSIC-UPC, Barcelona, Spain
Brian Y. Lim
National University of Singapore, Singapore, Singapore
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517522

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Mistakes, Explainability

383-385
5 presentations
2022-05-03 18:00:00 – 2022-05-03 19:15:00