AI for Health

Conference Name
CHI 2023
Assertiveness-based Agent Communication for a Personalized Medicine on Medical Imaging Diagnosis: Assertiveness-based BreastScreening-AI
Abstract

Intelligent agents are showing increasing promise for clinical decision-making in a variety of healthcare settings. While a substantial body of work has contributed to the best strategies to convey these agents’ decisions to clinicians, few have considered the impact of personalizing and customizing these communications on the clinicians’ performance and receptiveness. This raises the question of how intelligent agents should adapt their tone in accordance with their target audience. We designed two approaches to communicate the decisions of an intelligent agent for breast cancer diagnosis with different tones: a suggestive (non-assertive) tone and an imposing (assertive) one. We used an intelligent agent to inform about: (1) the number of detected findings; (2) cancer severity on each breast and per medical imaging modality; (3) a visual scale representing severity estimates; (4) the sensitivity and specificity of the agent; and (5) clinical arguments of the patient, such as pathological co-variables. Our results demonstrate that assertiveness plays an important role in how this communication is perceived and in its benefits. We show that personalizing assertiveness according to the professional experience of each clinician can reduce medical errors and increase satisfaction, bringing a novel perspective to the design of adaptive communication between intelligent agents and clinicians.

Authors
Francisco Maria Calisto
IST - U. Lisboa, Lisbon, Portugal
João Fernandes
IST - U. Lisboa, Lisbon, Portugal
Margarida Morais
IST - U. Lisboa, Lisbon, Portugal
Carlos Santiago
IST - U. Lisboa, Lisbon, Portugal
João Maria Veigas Abrantes
Centro Hospitalar de Trás-Os-Montes e Alto Douro, Vila Real, Portugal
Nuno Jardim Nunes
Instituto Superior Técnico - U. Lisbon, Lisbon, Portugal
Jacinto Nascimento
ISR-IST, Lisbon, Portugal
Paper URL

https://doi.org/10.1145/3544548.3580682

Video
Healthcare AI Treatment Decision Support: Design Principles to Enhance Clinician Adoption and Trust
Abstract

Artificial intelligence (AI) supported clinical decision support (CDS) technologies can parse vast quantities of patient data into meaningful insights for healthcare providers. Much work is underway to determine the technical feasibility and the accuracy of AI-driven insights. Much less is known about what insights are considered useful and actionable by healthcare providers, their trust in the insights, and clinical workflow integration challenges. Our research team used a conceptual prototype based on AI-generated treatment insights for type 2 diabetes medications to elicit feedback from 41 U.S.-based clinicians, including primary care and internal medicine physicians, endocrinologists, nurse practitioners, physician assistants, and pharmacists. We contribute to the human-computer interaction (HCI) community by describing decision optimization and design objective tensions between population-level and personalized insights, and patterns of use and trust of AI systems. We also contribute a set of 6 design principles for AI-supported CDS.

Authors
Eleanor R. Burgess
Elevance Health, Palo Alto, California, United States
Ivana Jankovic
Elevance Health, Palo Alto, California, United States
Melissa Austin
Elevance Health, Chicago, Illinois, United States
Nancy Cai
Elevance Health, Chicago, Illinois, United States
Adela Kapuścińska
Elevance Health, Warsaw, Poland
Suzanne Currie
Elevance Health, Palo Alto, California, United States
J. Marc Overhage
Elevance Health, Indianapolis, Indiana, United States
Erika S. Poole
Elevance Health, Chicago, Illinois, United States
Jofish Kaye
Elevance Health, Palo Alto, California, United States
Paper URL

https://doi.org/10.1145/3544548.3581251

Video
Understanding the Benefits and Challenges of Deploying Conversational AI Leveraging Large Language Models for Public Health Intervention
Abstract

Recent large language models (LLMs) have advanced the quality of open-ended conversations with chatbots. Although LLM-driven chatbots have the potential to support public health interventions by monitoring populations at scale through empathetic interactions, their use in real-world settings is underexplored. We thus examine the case of CareCall, an open-domain chatbot that aims to support socially isolated individuals via check-up phone calls and monitoring by teleoperators. Through focus group observations and interviews with 34 people from three stakeholder groups, including the users, the teleoperators, and the developers, we found CareCall offered a holistic understanding of each individual while offloading the public health workload and helped mitigate loneliness and emotional burdens. However, our findings highlight that traits of LLM-driven chatbots led to challenges in supporting public and personal health needs. We discuss considerations of designing and deploying LLM-driven chatbots for public health intervention, including tensions among stakeholders around system expectations.

Award
Best Paper
Authors
Eunkyung Jo
University of California, Irvine, Irvine, California, United States
Daniel A. Epstein
University of California, Irvine, Irvine, California, United States
Hyunhoon Jung
Naver Clova AI, Seongnam-si, Korea, Republic of
Young-Ho Kim
NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of
Paper URL

https://doi.org/10.1145/3544548.3581503

Video
“If I Had All the Time in the World”: Ophthalmologists' Perceptions of Anchoring Bias Mitigation in Clinical AI Support
Abstract

Clinical needs and technological advances have resulted in increased use of Artificial Intelligence (AI) in clinical decision support. However, such support can introduce new cognitive biases and amplify existing ones. Through contextual inquiry and interviews, we set out to understand the use of an existing AI support system by ophthalmologists. We identified concerns regarding anchoring bias and a misunderstanding of the AI's capabilities. We then evaluated clinicians' perceptions of three bias mitigation strategies as integrated into their existing decision support system. While clinicians recognised the danger of anchoring bias, we identified a concern around the impact of bias mitigation on procedure time. Our participants were divided in their expectations of any positive impact on diagnostic accuracy, stemming from varying reliance on the decision support. Our results provide insights into the challenges of integrating bias mitigation into AI decision support.

Authors
Anne Kathrine Petersen Bach
Aalborg University, Aalborg, Denmark
Trine Munch Nørgaard
Aalborg University, Aalborg, Denmark
Jens Christian Brok
Aalborg University, Aalborg, Denmark
Niels van Berkel
Aalborg University, Aalborg, Denmark
Paper URL

https://doi.org/10.1145/3544548.3581513

Video
Rethinking the Role of AI with Physicians in Oncology: Revealing Perspectives from Clinical and Research Workflows
Abstract

Significant and rapid advancements in cancer research have been attributed to Artificial Intelligence (AI). However, AI's role and impact on the clinical side has been limited. This discrepancy stems from the overlooked, yet profound, differences between clinical and research practices in oncology. Our contribution seeks to scrutinize physicians' engagement with AI by interviewing 7 medical-imaging experts and to disentangle its future alignment across the clinical and research workflows, diverging from the existing "one-size-fits-all" paradigm within Human-Centered AI discourses. Our analysis revealed that physicians' trust in AI is less dependent on their general acceptance of AI and more on their contestable experiences with AI. Contestability, in clinical workflows, underpins the need for personal supervision of AI outcomes and processes, i.e., clinician-in-the-loop. Finally, we discuss tensions in the desired attributes of AI, such as explainability and control, contextualizing them within the divergent intentionality and scope of clinical and research workflows.

Authors
Himanshu Verma
TU Delft, Delft, Netherlands
Jakub Mlynar
University of Applied Sciences of Western Switzerland (HES-SO), Sierre, Switzerland
Roger Schaer
HES-SO, Sierre, Switzerland
Julien Reichenbach
HES-SO, Sierre, Switzerland
Mario Jreige
CHUV, Lausanne, Switzerland
John Prior
Lausanne University Hospital, Lausanne, Switzerland
Florian Evéquoz
HES-SO Valais, Sierre, Switzerland
Adrien Depeursinge
HES-SO Valais, Sierre, Switzerland
Paper URL

https://doi.org/10.1145/3544548.3581506

Video
Harnessing Biomedical Literature to Calibrate Clinicians' Trust in AI Decision Support Systems
Abstract

Clinical decision support tools (DSTs), powered by Artificial Intelligence (AI), promise to improve clinicians' diagnostic and treatment decision-making. However, no AI model is always correct. DSTs must enable clinicians to validate each AI suggestion, convincing them to accept the correct suggestions while rejecting erroneous ones. While prior work often tried to do so by explaining AI's inner workings or performance, we chose a different approach: We investigated how clinicians validated each other's suggestions in practice (often by referencing scientific literature) and designed a new DST that embraces these naturalistic interactions. This design uses GPT-3 to draw literature evidence that shows the AI suggestions' robustness and applicability (or the lack thereof). A prototyping study with clinicians from three disease areas showed this approach to be promising. Clinicians' interactions with the prototype also revealed new design and research opportunities around (1) harnessing the complementary strengths of literature-based and predictive decision supports; (2) mitigating risks of de-skilling clinicians; and (3) offering low-data decision support with literature.

Authors
Qian Yang
Cornell University, Ithaca, New York, United States
Yuexing Hao
Cornell University, Ithaca, New York, United States
Kexin Quan
University of California, San Diego, San Diego, California, United States
Stephen Yang
Cornell University, Ithaca, New York, United States
Yiran Zhao
Cornell Tech, New York, New York, United States
Volodymyr Kuleshov
Cornell Tech, New York, New York, United States
Fei Wang
Weill Cornell Medicine, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581393