Graphical Perception of Saliency-based Model Explanations

Abstract

In recent years, considerable work has been devoted to explaining predictive, deep learning-based models, and in turn to evaluating those explanations. An important class of evaluation methods is human-centered, typically requiring that explanations be communicated through visualizations. Yet while visualization plays a critical role in perceiving and understanding model explanations, how visualization design impacts human perception of explanations remains poorly understood. In this work, we study the graphical perception of model explanations, specifically saliency-based explanations for visual recognition models. We propose an experimental design to investigate how human perception is influenced by visualization design, wherein we study the task of alignment assessment, or whether a saliency map aligns with an object in an image. Our findings show that visualization design decisions, the type of alignment, and qualities of the saliency map all play important roles in how humans perceive saliency-based visual explanations.
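For concreteness, one common way to quantify the kind of saliency-object alignment participants are asked to judge is to threshold the saliency map and measure its overlap with the object's segmentation mask, for example via intersection-over-union. The sketch below is only an illustration of that idea; the function name, threshold, and choice of IoU are assumptions, not the measure used in the paper.

import numpy as np

def alignment_iou(saliency: np.ndarray, object_mask: np.ndarray,
                  threshold: float = 0.5) -> float:
    """Illustrative (hypothetical) alignment score between a saliency map
    and an object mask.

    saliency    : HxW array of per-pixel attributions, scaled to [0, 1].
    object_mask : HxW boolean array marking the object's pixels.
    threshold   : saliency values at or above this are treated as salient.
    """
    salient = saliency >= threshold
    intersection = np.logical_and(salient, object_mask).sum()
    union = np.logical_or(salient, object_mask).sum()
    return float(intersection) / union if union > 0 else 0.0

# Example: a saliency blob that only partially overlaps the object region.
h, w = 8, 8
saliency = np.zeros((h, w))
saliency[2:6, 2:6] = 0.9             # high-attribution region
object_mask = np.zeros((h, w), bool)
object_mask[3:7, 3:7] = True         # object pixels
print(alignment_iou(saliency, object_mask))  # ~0.39: partial alignment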

Authors
Yayan Zhao
Vanderbilt University, Nashville, Tennessee, United States
Mingwei Li
Vanderbilt University, Nashville, Tennessee, United States
Matthew Berger
Vanderbilt University, Nashville, Tennessee, United States
Paper URL

https://doi.org/10.1145/3544548.3581320


Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Visualization Perception

Hall F
6 presentations
2023-04-24 23:30:00 – 2023-04-25 00:55:00