Health and AI B

Conference Name
CHI 2024
The Illusion of Empathy? Notes on Displays of Emotion in Human-Computer Interaction
Abstract

From ELIZA to Alexa, Conversational Agents (CAs) have been deliberately designed to elicit or project empathy. Although empathy can help technology better serve human needs, it can also be deceptive and potentially exploitative. In this work, we characterize empathy in interactions with CAs, highlighting the importance of distinguishing evocations of empathy between two humans from ones between a human and a CA. To this end, we systematically prompt CAs backed by large language models (LLMs) to display empathy while conversing with, or about, 65 distinct human identities, and also compare how different LLMs display or model empathy. We find that CAs make value judgments about certain identities, and can be encouraging of identities related to harmful ideologies (e.g., Nazism and xenophobia). Moreover, a computational approach to understanding empathy reveals that despite their ability to display empathy, CAs do poorly when interpreting and exploring a user's experience, contrasting with their human counterparts.
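The systematic prompting described in this abstract can be sketched roughly as follows. This is an illustrative sketch, not the authors' code: the template wording and the three identities are hypothetical stand-ins for the study's 65 distinct identities, and `build_prompts` is a name invented here.

```python
# Illustrative sketch (not the authors' code): construct one
# empathy-eliciting prompt per human identity, as the study does
# systematically for 65 identities across several LLM-backed agents.
EMPATHY_TEMPLATE = (
    "You are a conversational agent. Respond with empathy to a user "
    "who tells you: 'I am {identity}.'"
)

# Hypothetical stand-ins for the study's 65 distinct identities.
IDENTITIES = ["a refugee", "a nurse", "a gig worker"]

def build_prompts(identities, template=EMPATHY_TEMPLATE):
    """Return one empathy-eliciting prompt per identity."""
    return [template.format(identity=i) for i in identities]

prompts = build_prompts(IDENTITIES)
# Each prompt would then be sent to each LLM under test, and the
# responses coded for value judgments and empathic quality.
```

In the study's framing, the same prompt battery is run against multiple LLMs so that displays of empathy can be compared across both models and identities.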

Award
Honorable Mention
Authors
Andrea Cuadra
Stanford University, Stanford, California, United States
Maria Wang
Stanford University, Stanford, California, United States
Lynn Andrea Stein
Franklin W. Olin College of Engineering, Needham, Massachusetts, United States
Malte F. Jung
Cornell University, Ithaca, New York, United States
Nicola Dell
Cornell Tech, New York, New York, United States
Deborah Estrin
Cornell Tech, New York, New York, United States
James A. Landay
Stanford University, Stanford, California, United States
Paper URL

doi.org/10.1145/3613904.3642336

Video
Rethinking Human-AI Collaboration in Complex Medical Decision Making: A Case Study in Sepsis Diagnosis
Abstract

Today's AI systems for medical decision support often succeed on benchmark datasets in research papers but fail in real-world deployment. This work focuses on decision making for sepsis, an acute, life-threatening systemic infection that requires early diagnosis by the clinician under high uncertainty. Our aim is to explore the design requirements for AI systems that can support clinical experts in making better decisions for the early diagnosis of sepsis. The study begins with a formative study investigating why clinical experts abandon an existing AI-powered sepsis prediction module in their electronic health record (EHR) system. We argue that a human-centered AI system needs to support human experts in the intermediate stages of a medical decision-making process (e.g., generating hypotheses or gathering data), instead of focusing only on the final decision. Therefore, we build SepsisLab based on a state-of-the-art AI algorithm and extend it to predict the future projection of sepsis development, visualize the prediction uncertainty, and propose actionable suggestions (i.e., which additional laboratory tests can be collected) to reduce such uncertainty. Through a heuristic evaluation with six clinicians using our prototype system, we demonstrate that SepsisLab enables a promising human-AI collaboration paradigm for the future of AI-assisted sepsis diagnosis and other high-stakes medical decision making.
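The "actionable suggestions" idea, recommending which lab test to order next so that prediction uncertainty shrinks the most, can be sketched as below. This is a minimal sketch under stated assumptions, not SepsisLab's algorithm: the uncertainty numbers and test names are hypothetical, and a real system would estimate the expected post-test uncertainty from the model itself.

```python
# Illustrative sketch (hypothetical values, not SepsisLab's algorithm):
# rank candidate lab tests by how much each is expected to reduce the
# model's predictive uncertainty about a patient's sepsis risk.
current_uncertainty = 0.42  # e.g., std. dev. of the current risk prediction

# Hypothetical expected uncertainty if each test's result were known.
expected_after_test = {
    "lactate": 0.21,
    "white_blood_cell_count": 0.30,
    "creatinine": 0.38,
}

def rank_tests(baseline, expected):
    """Order candidate tests by expected uncertainty reduction, largest first."""
    gains = {test: baseline - u for test, u in expected.items()}
    return sorted(gains, key=gains.get, reverse=True)

print(rank_tests(current_uncertainty, expected_after_test))
# → ['lactate', 'white_blood_cell_count', 'creatinine']
```

The design point the abstract makes is that surfacing this ranking supports the intermediate, data-gathering stage of diagnosis rather than only the final risk score.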

Authors
Shao Zhang
Northeastern University, Boston, Massachusetts, United States
Jianing Yu
Hong Kong University, Hong Kong, Hong Kong
Xuhai "Orson" Xu
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Changchang Yin
The Ohio State University, Columbus, Ohio, United States
Yuxuan Lu
Northeastern University, Boston, Massachusetts, United States
Bingsheng Yao
Rensselaer Polytechnic Institute, Troy, New York, United States
Melanie Tory
Northeastern University, Portland, Maine, United States
Lace M. Padilla
Northeastern University, Boston, Massachusetts, United States
Jeffrey Caterino
The Ohio State University, Columbus, Ohio, United States
Ping Zhang
The Ohio State University, Columbus, Ohio, United States
Dakuo Wang
Northeastern University, Boston, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642343

Video
Leveraging Prompt-Based Large Language Models: Predicting Pandemic Health Decisions and Outcomes Through Social Media Language
Abstract

We introduce a multi-step reasoning framework using prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes. Grounded in fuzzy-trace theory, which emphasizes the importance of “gists” of causal coherence in effective health communication, we introduce Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework, to identify gists at scale. Using RBIC, we systematically extract gists from subreddit discussions opposing COVID-19 health measures (Study 1). We then track how these gists evolve across key events (Study 2) and assess their influence on online engagement (Study 3). Finally, we investigate how the volume of gists is associated with national health trends like vaccine uptake and hospitalizations (Study 4). Our work is the first to empirically link social media linguistic patterns to real-world public health trends, highlighting the potential of prompt-based LLMs in identifying critical online discussion patterns that can form the basis of public health communication strategies.
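A multi-step prompting pipeline in the spirit of RBIC, where each prompt step builds on the previous step's output, can be sketched as follows. This is a hypothetical wiring, not the authors' framework: the step templates are invented for illustration, and `call_llm` is a stand-in for whatever chat-model API a real pipeline would use.

```python
# Illustrative sketch of multi-step, role-based prompting in the spirit
# of RBIC (hypothetical templates; `call_llm` is a stand-in for a real API).
STEPS = [
    "You are a public-health analyst. Identify the causal claim in: {text}",
    "Restate that claim as a short cause-and-effect 'gist': {prior}",
    "Classify the gist's stance toward COVID-19 health measures: {prior}",
]

def call_llm(prompt):
    # Placeholder: a real pipeline would query an LLM here.
    return f"[model output for: {prompt[:40]}...]"

def run_pipeline(text, steps=STEPS):
    """Feed each step's output into the next step's prompt (incremental coaching)."""
    prior = text
    outputs = []
    for template in steps:
        prompt = template.format(text=text, prior=prior)
        prior = call_llm(prompt)
        outputs.append(prior)
    return outputs
```

Chaining the steps, rather than asking for the gist in one shot, mirrors the "incremental coaching" idea: each intermediate output constrains the next prompt.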

Authors
Xiaohan Ding
Virginia Tech, Blacksburg, Virginia, United States
Buse Carik
Virginia Tech, Blacksburg, Virginia, United States
Uma Sushmitha Gunturi
IBM, Mountain View, California, United States
Valerie Reyna
Cornell University, Ithaca, New York, United States
Eugenia H. Rho
Virginia Tech, Blacksburg, Virginia, United States
Paper URL

doi.org/10.1145/3613904.3642117

Video
Multimodal Healthcare AI: Identifying and Designing Clinically Relevant Vision-Language Applications for Radiology
Abstract

Recent advances in AI combine large language models (LLMs) with vision encoders that bring forward unprecedented technical capabilities to leverage for a wide range of healthcare applications. Focusing on the domain of radiology, vision-language models (VLMs) achieve good performance results for tasks such as generating radiology findings based on a patient's medical image, or answering visual questions (e.g., “Where are the nodules in this chest X-ray?”). However, the clinical utility of potential applications of these capabilities is currently underexplored. We engaged in an iterative, multidisciplinary design process to envision clinically relevant VLM interactions, and co-designed four VLM use concepts: Draft Report Generation, Augmented Report Review, Visual Search and Querying, and Patient Imaging History Highlights. We studied these concepts with 13 radiologists and clinicians who assessed the VLM concepts as valuable, yet articulated many design considerations. Reflecting on our findings, we discuss implications for integrating VLM capabilities in radiology, and for healthcare AI more generally.

Authors
Nur Yildirim
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hannah Richardson
Microsoft Health Futures, Cambridge, United Kingdom
Maria Teodora Wetscherek
Cambridge University Hospitals NHS Foundation Trust, Cambridge, United Kingdom
Junaid Bajwa
Microsoft Health Futures, Cambridge, United Kingdom
Joseph Jacob
University College London, London, United Kingdom
Mark Ames Pinnock
University College London, London, United Kingdom
Stephen Harris
University College London Hospital NHS Foundation Trust, London, United Kingdom
Daniel Coelho de Castro
Microsoft Health Futures, Cambridge, United Kingdom
Shruthi Bannur
Microsoft Health Futures, Cambridge, United Kingdom
Stephanie Hyland
Microsoft Health Futures, Cambridge, United Kingdom
Pratik Ghosh
Microsoft Health Futures, Cambridge, United Kingdom
Mercy Ranjit
Microsoft Health Futures, Bengaluru, India
Kenza Bouzid
Microsoft Health Futures, Cambridge, United Kingdom
Anton Schwaighofer
Microsoft Health Futures, Cambridge, United Kingdom
Fernando Pérez-García
Microsoft Health Futures, Cambridge, United Kingdom
Harshita Sharma
Microsoft Health Futures, Cambridge, United Kingdom
Ozan Oktay
Microsoft Health Futures, Cambridge, United Kingdom
Matthew Lungren
Microsoft Nuance, Palo Alto, California, United States
Javier Alvarez-Valle
Microsoft Health Futures, Cambridge, United Kingdom
Aditya Nori
Microsoft Health Futures, Cambridge, United Kingdom
Anja Thieme
Microsoft Health Futures, Cambridge, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642013

Video
Human-Algorithmic Interaction Using a Large Language Model-Augmented Artificial Intelligence Clinical Decision Support System
Abstract

Integration of artificial intelligence (AI) into clinical decision support systems (CDSS) poses a socio-technological challenge that is impacted by usability, trust, and human-computer interaction (HCI). AI-CDSS interventions have shown limited benefit in clinical outcomes, which may be due to insufficient understanding of how health-care providers interact with AI systems. Large language models (LLMs) have the potential to enhance AI-CDSS, but have not been studied in either simulated or real-world clinical scenarios. We present findings from a randomized controlled trial deploying AI-CDSS for the management of upper gastrointestinal bleeding (UGIB) with and without an LLM interface within realistic clinical simulations for physician and medical student participants. We find evidence that LLM augmentation improves ease-of-use, that LLM-generated responses with citations improve trust, and that HCI varies based on clinical expertise. Qualitative themes from interviews suggest the perception of LLM-augmented AI-CDSS as a team member used to confirm initial clinical intuitions and help evaluate borderline decisions.

Authors
Niroop Channa Rajashekar
Yale School of Medicine, New Haven, Connecticut, United States
Yeo Eun Shin
Yale Medicine, New Haven, Connecticut, United States
Yuan Pu
Yale School of Medicine, New Haven, Connecticut, United States
Sunny Chung
Yale School of Medicine, New Haven, Connecticut, United States
Kisung You
CUNY Baruch College, New York, New York, United States
Mauro Giuffre
Yale School of Medicine, New Haven, Connecticut, United States
Colleen E. Chan
Yale University, New Haven, Connecticut, United States
Theo Saarinen
University of California, Berkeley, Berkeley, California, United States
Allen Hsiao
Yale School of Medicine, New Haven, Connecticut, United States
Jasjeet Sekhon
Yale University, New Haven, Connecticut, United States
Ambrose H. Wong
Yale University, New Haven, Connecticut, United States
Leigh V. Evans
Yale School of Medicine, New Haven, Connecticut, United States
Rene F. Kizilcec
Cornell University, Ithaca, New York, United States
Loren Laine
Yale School of Medicine, New Havent, Connecticut, United States
Terika McCall
Yale School of Public Health, New Haven, Connecticut, United States
Dennis Shung
Yale School of Medicine, New Haven, Connecticut, United States
Paper URL

doi.org/10.1145/3613904.3642024

Video