Mental Health Chatbots and Conversational Agents

Conference
CHI 2026
"It Seems to Understand My Heart": An Empirical Study of Persona-Driven Persuasive AI Agent for Aging-in-Place in Singapore
Abstract

Persona-based, empathetic approaches can foster sustainable long-term user-agent engagement in aging-in-place contexts. We present PersonaBot, a persona-driven persuasive agent built on a Dual-Persona framework that constructs user personas and generates culturally diverse, gender- and personality-varied agent personas, pairing users with preferred agent personas and adapting them over time. In an eight-week field deployment (8 participants; 1005 participant messages; 2432 agent messages), PersonaBot significantly increased perceived empathy, slowed engagement decline relative to a non-persona baseline, and elicited more elaborative interactions. Effectiveness varied with users’ technological self-efficacy, autonomy preferences, cultural identity, and social patterns, underscoring heterogeneous persona needs. Contrary to our initial assumptions, participants sometimes chose cross-cultural agents for perceived professionalism (over demographic similarity) and favored teacher-like personas balancing authority and warmth. Many framed the agent as a co-pilot rather than a caregiver replacement and engaged selectively, indicating agent personas should respect autonomy and invite—rather than demand—interaction.

Authors
Bo Gao
LILY (Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly), Singapore, Singapore
Zhiwei Zeng
Nanyang Technological University, Singapore, Singapore
Yue Yu
Nanyang Technological University, Singapore, Singapore
Iain P. Werry
Nanyang Technological University, Singapore, Singapore
Chung Leung Chan
Nanyang Technological University, Singapore, Singapore
Ming Chen
https://www.ntu.edu.sg/lily, Singapore, Singapore
Huiguo Zhang
Nanyang Technological University, Singapore, Singapore
Bo Huang
Nanyang Technological University, Singapore, Singapore
Jun Ji
Nanyang Technological University, Singapore, Singapore
Cyril Leung
The University of British Columbia, Vancouver, British Columbia, Canada
Chunyan Miao
Nanyang Technological University, Singapore, Singapore
Video
Fit Matters: Format–Distance Alignment Improves Conversational Search
Abstract

Existing conversational search systems can synthesize information into responses, but they lack principled ways to adapt response formats to users' cognitive states. This paper investigates whether aligning format and distance, which involves matching information granularity and media to users' psychological distance, improves user experience. In a between-subjects experiment (N=464) on travel planning, we crossed two distance dimensions (temporal/spatial × near/far) with four formats varying in granularity (abstract/concrete) and media (text/image-and-text). The experiment established that format-distance alignment reduced users' risk perceptions while increasing decision confidence, perceptions of information usefulness, ease of use, enjoyment, and credibility, and adoption intentions. Concrete formats imposed higher cognitive load, but yielded productive effort when matched to near-distance tasks. Images enhanced concrete but not abstract text, suggesting multimedia benefits depend on complementarity. These findings establish format-distance alignment as a distinctive and important design dimension, enabling systems to tailor response formats to users' psychological distance.

Authors
Yitian Yang
National University of Singapore, Singapore, Singapore
Yugin Tan
National University of Singapore, Singapore, Singapore
Jung-Tai King
National Dong Hwa University, Hualien, Taiwan
Yang Chen Lin
National Tsing Hua University, Hsinchu, Taiwan
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
Towards Better Health Conversations: The Benefits of Context-seeking
Abstract

Navigating health questions can be daunting in the modern information landscape. Large language models (LLMs) may provide tailored, accessible information, but also risk being inaccurate, biased or misleading. We present insights from 5 mixed-methods studies (total N=261), examining how people interact with LLMs for their own health questions. Qualitative studies revealed the importance of context-seeking in conversational AIs to elicit specific details a person may not volunteer or know to share. Context-seeking by LLMs was valued by participants, even if it meant deferring an answer for several turns. Incorporating these insights, we developed a “Wayfinding AI” to proactively solicit context. In two randomized, blinded studies, participants rated the Wayfinding AI as more helpful, relevant, and tailored to their concerns compared to a baseline AI. These results demonstrate the strong impact of proactive context-seeking on conversational dynamics, and suggest design patterns for conversational AI to help navigate health topics.

Authors
Rory Sayres
Google, Mountain View, California, United States
Yuexing Hao
Google, Mountain View, California, United States
Abbi Ward
Google, Mountain View, California, United States
Amy Wang
Google, Mountain View, California, United States
Beverly Freeman
Google, Mountain View, California, United States
Serena Zhan
Google, Mountain View, California, United States
Diego Ardila
Google, Mountain View, California, United States
Jimmy Li
Google, Mountain View, California, United States
I-Ching Lee
Google, Mountain View, California, United States
Anna Iurchenko
Google, Mountain View, California, United States
Siyi Kou
Google Research, Seattle, Washington, United States
Kartikeya Badola
Google, Mountain View, California, United States
Jimmy Hu
Google, Mountain View, California, United States
Bhawesh Kumar
Google, Mountain View, California, United States
Keith Y. Johnson
Google, Mountain View, California, United States
Supriya Vijay
Google, Mountain View, California, United States
Justin Krogue
Google, Mountain View, California, United States
Avinatan Hassidim
Google Research, Tel Aviv, Israel
Yossi Matias
Google, Mountain View, California, United States
Dale Webster
Google, Mountain View, California, United States
Sunny Virmani
Google, Mountain View, California, United States
Yun Liu
Google, Mountain View, California, United States
Quang Duong
Google, Mountain View, California, United States
Mike Schaekermann
Google Research, Mountain View, California, United States
When EmotionTech Causes Harm: The Case of Therapeutic XR
Abstract

Emotional harm and discomfort in therapeutic extended realities (XR) remain underexamined, even as immersive tools are increasingly deployed in healthcare contexts. We frame therapeutic XR as EmotionTech and reflect on 12 cases from 9 researchers and designers through interviews and workshops. We locate four concerns for emotional harm and identify ways to address them: how to talk about emotion, when to talk about emotion, whose emotions are centred, and which emotions are valued. Building on these themes and on therapeutic XR as one form of EmotionTech, we propose strategies to legitimise concerns for emotional safety in design and research practice, legitimise knowers by recognising diverse perspectives and situated experiences, and leverage ambiguity in design and training tools that foster reflexivity rather than closure. Together, these strategies reposition design responsibility in EmotionTech innovation and make visible its potential to cause emotional discomfort and harm.

Authors
Thida Sachathep
The University of Sydney, Sydney, NSW, Australia
Kiran Ijaz
The University of Sydney, Sydney, NSW, Australia
Danielle Lottridge
University of Auckland, Auckland, New Zealand
Anna Janssen
The University of Sydney, Sydney, NSW, Australia
Jonas Fritsch
IT University of Copenhagen, Copenhagen, Denmark
Lina Goh
The University of Sydney, Sydney, NSW, Australia
Philip Austin
HammondCare, Greenwich, NSW, Australia
Andrew Campbell
University of Sydney, Sydney, NSW, Australia
Barbara Barbosa Neves
Monash University, Melbourne, Victoria, Australia
Naseem Ahmadpour
The University of Sydney, Sydney, NSW, Australia
Affective and Goal-Oriented Factors of Relationship Formation in the Digital Therapeutic Alliance: A Longitudinal Study of Mental Health Chatbots
Abstract

Mental health chatbots are increasingly deployed as scalable interventions, yet the relational mechanisms underpinning their effectiveness remain unclear. Drawing on prior research on digital therapeutic alliance, we operationalized a preliminary multi-dimensional instrument to capture perceptions of relational and functional dynamics in mental health chatbot interactions and conducted a four-week within-subjects study with 56 participants engaging with Wysa and Youper (two widely used CBT-based mental health chatbots). Through iterative factor refinement and regression modeling, we found that user-chatbot relationship formation is primarily driven by two factors: an affective factor, centered on emotional support, and a goal-oriented factor, centered on practical assistance. Conversational control contributed alongside these interpersonal factors, while trust (privacy, non-judgmentalness) and satisfaction emerged as correlated outcomes of supportive, effective interactions rather than standalone predictors. These findings advance models of the Digital Therapeutic Alliance by clarifying its underlying structure and highlighting design priorities for balancing empathy and efficacy in conversational agents.

Award
Honorable Mention
Authors
Zian Xu
University of Auckland, Auckland, New Zealand
Yi-Chieh Lee
National University of Singapore, Singapore, Singapore
Karolina Stasiak
University of Auckland, Auckland, New Zealand
Jim Warren
The University of Auckland, Auckland, New Zealand
Danielle Lottridge
University of Auckland, Auckland, New Zealand
InnerPond: Fostering Inter-Self Dialogue with a Multi-Agent Approach for Introspection
Abstract

Introspection is central to identity construction and future planning, yet most digital tools approach the self as a unified entity. In contrast, Dialogical Self Theory (DST) views the self as composed of multiple internal perspectives, such as values, concerns, and aspirations, that can come into tension or dialogue with one another. Building on this view, we designed InnerPond, a research probe in the form of a multi-agent system that represents these internal perspectives as distinct LLM-based agents for introspection. Its design was shaped through iterative explorations of spatial metaphors, interaction scaffolding, and conversational orchestration, culminating in a shared spatial environment for organizing and relating multiple inner perspectives. In a user study with 17 young adults navigating career choices, participants engaged with the probe by co-creating inner voices with AI, composing relational inner landscapes, and orchestrating dialogue as observers and mediators, offering insight into how such systems could support introspection. Overall, this work offers design implications for AI-supported introspection tools that enable exploration of the self’s multiplicity.

Authors
Hayeon Jeon
Seoul National University, Seoul, Korea, Republic of
Dakyeom Ahn
Seoul National University, Seoul, Korea, Republic of
Sunyu Pang
Seoul National University, Seoul, Korea, Republic of
Yunseo Choi
Seoul National University, Seoul, Korea, Republic of
Suhwoo Yoon
Seoul National University, Seoul, Korea, Republic of
Joonhwan Lee
Seoul National University, Seocho-gu, Seoul, Korea, Republic of
Eun-mee Kim
Seoul National University, Seoul, Korea, Republic of
Hajin Lim
Seoul National University, Seoul, Korea, Republic of
Video