Ignore, Trust, or Negotiate: Understanding Clinician Acceptance of AI-Based Treatment Recommendations in Health Care

Abstract

Artificial intelligence (AI) in healthcare has the potential to improve patient outcomes, but clinician acceptance remains a critical barrier. We developed a novel decision support interface that provides interpretable treatment recommendations for sepsis, a life-threatening condition in which decisional uncertainty is common, treatment practices vary widely, and poor outcomes can occur even with optimal decisions. This system formed the basis of a mixed-methods study in which 24 intensive care clinicians made AI-assisted decisions on real patient cases. We found that explanations generally increased confidence in the AI, but concordance with specific recommendations varied beyond the binary acceptance or rejection described in prior work. Although clinicians sometimes ignored or trusted the AI, they also often prioritized aspects of the recommendations to follow, reject, or delay in a process we term "negotiation." These results reveal novel barriers to adoption of treatment-focused AI tools and suggest ways to better support differing clinician perspectives.

Authors
Venkatesh Sivaraman
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Leigh A. Bukowski
University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, United States
Joel Levin
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Jeremy M. Kahn
University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, United States
Adam Perer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3544548.3581075

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Trust and Explainable AI

Room X11+X12
6 presentations
2023-04-24 23:30:00 – 2023-04-25 00:55:00