Watch Out For Updates: Understanding the Effects of Model Explanation Updates in AI-Assisted Decision Making

Abstract

AI explanations have been increasingly used to help people better utilize AI recommendations in AI-assisted decision making. While AI explanations may change over time due to updates of the AI model, little is known about how these changes may affect people's perceptions and usage of the model. In this paper, we study how varying levels of similarity between the AI explanations before and after a model update affect people's trust in and satisfaction with the AI model. We conduct randomized human-subject experiments in two decision-making contexts where people have different levels of domain knowledge. Our results show that changes in AI explanations during the model update do not affect people's tendency to adopt AI recommendations. However, they may change people's subjective trust in and satisfaction with the AI model by changing both their perceived model accuracy and the perceived consistency of the AI explanations with their prior knowledge.

Authors
Xinru Wang
Purdue University, West Lafayette, Indiana, United States
Ming Yin
Purdue University, West Lafayette, Indiana, United States
Paper URL

https://doi.org/10.1145/3544548.3581366

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Trust and Explainable AI

Room X11+X12
6 presentations
2023-04-24 23:30:00 – 2023-04-25 00:55:00