Understanding the Impact of Explanations on Advice-Taking: a User Study for AI-based Clinical Decision Support Systems

Abstract

The field of eXplainable Artificial Intelligence (XAI) focuses on providing explanations for AI systems' decisions. Applying XAI to AI-based Clinical Decision Support Systems (DSS) should increase trust in the DSS by allowing clinicians to investigate the reasons behind its suggestions. In this paper, we present the results of a user study on the impact of advice from a clinical DSS on healthcare providers' judgment in two different cases: the case where the clinical DSS explains its suggestion and the case where it does not. We examined the weight of advice, the behavioral intention to use the system, and user perceptions, using both quantitative and qualitative measures. Our results indicate that advice has a greater impact when an explanation for the DSS decision is provided. Additionally, based on responses to the open-ended questions, we offer insights on how to improve the explanations accompanying diagnosis forecasts for healthcare assistants, nurses, and doctors.

Award
Honorable Mention
Authors
Cecilia Panigutti
Scuola Normale Superiore, Pisa, Italy
Andrea Beretta
CNR - Italian National Research Council, Pisa, Italy
Fosca Giannotti
Scuola Normale Superiore, Pisa, Italy
Dino Pedreschi
University of Pisa, Pisa, Italy
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502104

Video

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Trust and Control in AI Systems

5 presentations
2022-05-03 18:00:00 – 19:15:00