AI explanations are increasingly used to help people better utilize AI recommendations in AI-assisted decision making. While AI explanations may change over time due to updates of the AI model, little is known about how these changes affect people’s perceptions and usage of the model. In this paper, we study how varying levels of similarity between the AI explanations before and after a model update affect people’s trust in and satisfaction with the AI model. We conduct randomized human-subject experiments in two decision-making contexts where people have different levels of domain knowledge. Our results show that changes in AI explanations during the model update do not affect people’s tendency to adopt AI recommendations. However, these changes may alter people’s subjective trust in and satisfaction with the AI model by changing both their perceived model accuracy and the perceived consistency of the AI explanations with their prior knowledge.
https://doi.org/10.1145/3544548.3581366
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)