Health and AI A

Conference Name
CHI 2024
"It Is a Moving Process": Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine
Abstract

Clinicians increasingly pay attention to Artificial Intelligence (AI) to improve the quality and timeliness of their services. There are converging opinions on the need for Explainable AI (XAI) in healthcare. However, prior work considers explanations as stationary entities with no account for the temporal dynamics of patient care. In this work, we involve 16 Idiopathic Pulmonary Fibrosis (IPF) clinicians from a European university medical centre and investigate their evolving uses and purposes for explainability throughout patient care. By applying a patient journey map for IPF, we elucidate clinicians' informational needs, how human agency and patient-specific conditions can influence the interaction with XAI systems, and the content, delivery, and relevance of explanations over time. We discuss implications for integrating XAI in clinical contexts and more broadly how explainability is defined and evaluated. Furthermore, we reflect on the role of medical education in addressing epistemic challenges related to AI literacy.

Authors
Lorenzo Corti
Delft University of Technology, Delft, Netherlands
Rembrandt Oltmans
Delft University of Technology, Delft, Zuid-Holland, Netherlands
Jiwon Jung
Delft University of Technology, Delft, Netherlands
Agathe Balayn
Delft University of Technology, Delft, Netherlands
Marlies Wijsenbeek
Erasmus MC, University Medical Center Rotterdam, Rotterdam, Netherlands
Jie Yang
Delft University of Technology, Delft, Netherlands
Paper URL

doi.org/10.1145/3613904.3642551

Video
Understanding the Impact of Long-Term Memory on Self-Disclosure with Large Language Model-Driven Chatbots for Public Health Intervention
Abstract

Recent large language models (LLMs) offer the potential to support public health monitoring by facilitating health disclosure through open-ended conversations but rarely preserve the knowledge gained about individuals across repeated interactions. Augmenting LLMs with long-term memory (LTM) presents an opportunity to improve engagement and self-disclosure, but we lack an understanding of how LTM impacts people's interaction with LLM-driven chatbots in public health interventions. We examine the case of CareCall—an LLM-driven voice chatbot with LTM—through the analysis of 1,252 call logs and interviews with nine users. We found that LTM enhanced health disclosure and fostered positive perceptions of the chatbot by offering familiarity. However, we also observed challenges in promoting self-disclosure through LTM, particularly around addressing chronic health conditions and privacy concerns. We discuss considerations for LTM integration in LLM-driven chatbots for public health monitoring, including carefully deciding what topics need to be remembered in light of public health goals.

Authors
Eunkyung Jo
University of California, Irvine, Irvine, California, United States
Yuin Jeong
NAVER Cloud, Seongnam, Gyeonggi, Korea, Republic of
SoHyun Park
NAVER Cloud, Seongnam, Gyeonggi, Korea, Republic of
Daniel A. Epstein
University of California, Irvine, Irvine, California, United States
Young-Ho Kim
NAVER AI Lab, Seongnam, Gyeonggi, Korea, Republic of
Paper URL

doi.org/10.1145/3613904.3642420

Video
Advancing Patient-Centered Shared Decision-Making with AI Systems for Older Adult Cancer Patients
Abstract

Shared decision making (SDM) plays a vital role in clinical practice guidelines, fostering enduring therapeutic communication and patient-clinician relationships. Previous research indicates that active patient participation in decision-making improves satisfaction and treatment outcomes. However, medical decision-making can be intricate and multifaceted. To help make SDM more accessible, we designed a patient-centered Artificial Intelligence (AI) SDM system for older adult cancer patients who lack high health literacy to become more involved in the clinical decision-making process and to improve comprehension toward treatment outcomes. We conducted a pilot feasibility study through 12 preliminary interviews followed by 25 usability testing interviews after the system development, with older adult cancer survivors and clinicians. Results indicated promise in the AI system's ability to enhance SDM, providing personalized healthcare experiences and education for cancer patients. Clinician responses also provided useful suggestions for SDM’s new design and research opportunities in mitigating medical errors and improving clinical efficiency.

Authors
Yuexing Hao
Cornell University, Ithaca, New York, United States
Zeyu Liu
Cornell University, Ithaca, New York, United States
Robert N. Riter
Cornell University, Ithaca, New York, United States
Saleh Kalantari
Cornell University, Ithaca, New York, United States
Paper URL

doi.org/10.1145/3613904.3642353

Video
Beyond the Waiting Room: Patient's Perspectives on the Conversational Nuances of Pre-Consultation Chatbots
Abstract

Pre-consultation serves as a critical information exchange between healthcare providers and patients, streamlining visits and supporting patient-centered care. Human-led pre-consultations offer many benefits, yet they require significant time and energy from clinical staff. In this work, we identify design goals for pre-consultation chatbots given their potential to carry out human-like conversations and autonomously adapt their line of questioning. We conducted a study with 33 walk-in clinic patients to elicit design considerations for pre-consultation chatbots. Participants were exposed to one of two study conditions: an LLM-powered AI agent and a Wizard-of-Oz agent simulated by medical professionals. Our study found that both conditions were equally well-received and demonstrated comparable conversational capabilities. However, the extent of the follow-up questions and the amount of empathy impacted the chatbot's perceived thoroughness and sincerity. Patients also highlighted the importance of setting expectations for the chatbot before and after the pre-consultation experience.

Authors
Brenna Li
University of Toronto, Toronto, Ontario, Canada
Ofek Gross
University of Toronto, Toronto, Ontario, Canada
Noah Crampton
University of Toronto, Toronto, Ontario, Canada
Mamta Kapoor
NOSM, North Bay, Ontario, Canada
Saba Tauseef
Independent Researcher, Brampton, Ontario, Canada
Mohit Jain
Microsoft Research, Bangalore, Karnataka, India
Khai N. Truong
University of Toronto, Toronto, Ontario, Canada
Alex Mariakakis
University of Toronto, Toronto, Ontario, Canada
Paper URL

doi.org/10.1145/3613904.3641913

Video
How Much Decision Power Should (A)I Have?: Investigating Patients’ Preferences Towards AI Autonomy in Healthcare Decision Making
Abstract

Despite the growing potential of artificial intelligence (AI) in improving clinical decision making, patients' perspectives on the use of AI for their care decision making are underexplored. In this paper, we investigate patients’ preferences towards the autonomy of AI in assisting healthcare decision making. We conducted interviews and an online survey using an interactive narrative and speculative AI prototypes to elicit participants’ preferred choices of using AI in a pregnancy care context. The analysis of the interviews and in-story responses reveals that patients’ preferences for AI autonomy vary per person and context, and may change over time. This finding suggests the need for involving patients in defining and reassessing the appropriate level of AI assistance for healthcare decision making. Departing from these varied preferences for AI autonomy, we discuss implications for incorporating patient-centeredness in designing AI-powered healthcare decision making.

Award
Honorable Mention
Authors
Dajung Kim
Delft University of Technology, Delft, Netherlands
Niko Vegt
Delft University of Technology, Delft, Netherlands
Valentijn Visch
Delft University of Technology, Delft, Netherlands
Marina Bos-de Vos
Delft University of Technology, Delft, Netherlands
Paper URL

doi.org/10.1145/3613904.3642883

Video