Persona-based, empathetic approaches can foster sustainable long-term user-agent engagement in aging-in-place contexts. We present PersonaBot, a persona-driven persuasive agent built on a Dual-Persona framework that constructs user personas, generates culturally diverse, gender- and personality-varied agent personas, pairs each user with a preferred agent persona, and adapts the pairing over time. In an eight-week field deployment (8 participants; 1005 participant messages; 2432 agent messages), PersonaBot significantly increased perceived empathy, slowed engagement decline relative to a non-persona baseline, and elicited more elaborative interactions. Effectiveness varied with users’ technological self-efficacy, autonomy preferences, cultural identity, and social patterns, underscoring heterogeneous persona needs. Contrary to our initial assumptions, participants sometimes chose cross-cultural agents for perceived professionalism rather than demographic similarity, and favored teacher-like personas that balance authority and warmth. Many framed the agent as a co-pilot rather than a caregiver replacement and engaged selectively, indicating that agent personas should respect autonomy and invite, rather than demand, interaction.
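The abstract does not specify how the Dual-Persona framework performs its user-agent pairing; a minimal sketch of one plausible matching scheme, assuming simple trait profiles and an optional, user-controlled cultural-match preference (all names and weights below are hypothetical, not PersonaBot's actual method):

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """Hypothetical trait profile for a user or agent persona."""
    culture: str
    gender: str
    # Personality traits on a 0-1 scale (illustrative only)
    traits: dict = field(default_factory=dict)

def trait_similarity(a: Persona, b: Persona) -> float:
    """Mean agreement over shared trait keys (1 = identical)."""
    keys = set(a.traits) & set(b.traits)
    if not keys:
        return 0.0
    return sum(1 - abs(a.traits[k] - b.traits[k]) for k in keys) / len(keys)

def pair_agent_persona(user: Persona, candidates: list[Persona],
                       prefer_same_culture: bool = False) -> Persona:
    """Pick the candidate agent persona with the best weighted score.

    Cultural match is an optional preference rather than a default,
    since participants sometimes preferred cross-cultural agents.
    """
    def score(agent: Persona) -> float:
        s = trait_similarity(user, agent)
        if prefer_same_culture and agent.culture == user.culture:
            s += 0.2  # small, tunable preference bonus
        return s
    return max(candidates, key=score)
```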
Existing conversational search systems can synthesize information into responses, but they lack principled ways to adapt response formats to users' cognitive states. This paper investigates whether format-distance alignment, matching the granularity and media of a response to the user's psychological distance from the task, improves user experience. In a between-subjects experiment (N=464) on travel planning, we crossed two distance dimensions (temporal/spatial × near/far) with four formats varying in granularity (abstract/concrete) and media (text/image-and-text). Format-distance alignment reduced users' risk perceptions while increasing decision confidence, perceived usefulness, ease of use, enjoyment, credibility, and adoption intentions. Concrete formats imposed higher cognitive load but yielded productive effort when matched to near-distance tasks. Images enhanced concrete but not abstract text, suggesting that multimedia benefits depend on complementarity. These findings establish format-distance alignment as a distinctive and important design dimension, enabling systems to tailor response formats to users' psychological distance.
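The alignment pattern the study tested reduces to a simple design rule: near psychological distance calls for concrete, image-and-text responses, far distance for abstract text. A minimal sketch of that mapping (names are hypothetical, and detecting the user's distance is assumed to happen elsewhere):

```python
from enum import Enum

class Distance(Enum):
    NEAR = "near"  # e.g., a trip next week, or to a nearby place
    FAR = "far"    # e.g., a trip next year, or to a faraway place

def choose_response_format(distance: Distance) -> dict:
    """Map psychological distance to response granularity and media,
    following the alignment the experiment found beneficial:
    concrete, image-and-text formats for near tasks; abstract,
    text-only formats for far ones (images did not help abstract text)."""
    if distance is Distance.NEAR:
        return {"granularity": "concrete", "media": "image_and_text"}
    return {"granularity": "abstract", "media": "text"}

print(choose_response_format(Distance.NEAR))
# {'granularity': 'concrete', 'media': 'image_and_text'}
```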
Navigating health questions can be daunting in the modern information landscape. Large language models (LLMs) may provide tailored, accessible information, but they also risk being inaccurate, biased, or misleading. We present insights from five mixed-methods studies (total N=261) examining how people interact with LLMs for their own health questions. Qualitative studies revealed the importance of context-seeking: conversational AIs can elicit specific details a person may not volunteer or know to share. Participants valued this context-seeking, even when it deferred an answer for several turns. Incorporating these insights, we developed a “Wayfinding AI” that proactively solicits context. In two randomized, blinded studies, participants rated the Wayfinding AI as more helpful, relevant, and tailored to their concerns than a baseline AI. These results demonstrate the strong impact of proactive context-seeking on conversational dynamics and suggest design patterns for conversational AI that helps people navigate health topics.
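The abstract describes the Wayfinding AI's behavior (ask clarifying questions before answering) but not its implementation. One lightweight way to realize proactive context-seeking is through the system prompt alone; a minimal sketch, where the prompt wording is illustrative and `generate` stands in for whatever chat-completion function the deployment uses:

```python
WAYFINDING_SYSTEM_PROMPT = """\
You are a health-information assistant. Before answering a health
question, ask at most one concise clarifying question per turn to
gather context the user may not think to volunteer (e.g., duration,
severity, relevant history). Once you have enough context, answer
plainly and note when a clinician should be consulted."""

def wayfinding_turn(history: list[dict], user_message: str,
                    generate) -> str:
    """One conversational turn. `generate` is a placeholder for any
    chat-completion function taking a list of role/content messages."""
    messages = ([{"role": "system", "content": WAYFINDING_SYSTEM_PROMPT}]
                + history
                + [{"role": "user", "content": user_message}])
    return generate(messages)
```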
Emotional harm and discomfort in therapeutic extended reality (XR) remain underexamined, even as immersive tools are increasingly deployed in healthcare contexts. We frame therapeutic XR as EmotionTech and reflect on 12 cases from 9 researchers and designers through interviews and workshops. We locate four concerns for emotional harm and identify ways to address them: how to talk about emotion, when to talk about emotion, whose emotions are centred, and which emotions are valued. Building on these themes, and treating therapeutic XR as one form of EmotionTech, we propose strategies to legitimise concerns for emotional safety in design and research practice, to legitimise knowers by recognising diverse perspectives and situated experiences, and to leverage ambiguity in design and training tools that foster reflexivity rather than closure. Together, these strategies reposition design responsibility in EmotionTech innovation and make visible its potential to cause emotional discomfort and harm.
Mental health chatbots are increasingly deployed as scalable interventions, yet the relational mechanisms underpinning their effectiveness remain unclear. Drawing on prior research on the digital therapeutic alliance, we operationalized a preliminary multi-dimensional instrument to capture perceptions of relational and functional dynamics in mental health chatbot interactions, and we conducted a four-week within-subjects study in which 56 participants engaged with Wysa and Youper, two widely used CBT-based mental health chatbots. Through iterative factor refinement and regression modeling, we found that user-chatbot relationship formation is primarily driven by two factors: an affective factor centered on emotional support, and a goal-oriented factor centered on practical assistance. Conversational control contributed alongside these interpersonal factors, while trust (privacy, non-judgmentalness) and satisfaction emerged as correlated outcomes of supportive, effective interactions rather than standalone predictors. These findings advance models of the Digital Therapeutic Alliance by clarifying its underlying structure and highlighting design priorities for balancing empathy and efficacy in conversational agents.
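The regression step of such an analysis could look roughly as follows; the column names and data are hypothetical stand-ins (not the study's dataset), and `statsmodels` is assumed as the modeling library:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant scores: mean item ratings for the two
# retained factors plus conversational control, predicting an overall
# alliance rating (values are illustrative only).
df = pd.DataFrame({
    "affective": [4.2, 3.1, 3.8, 2.9, 4.5, 3.3],
    "goal":      [3.9, 3.4, 4.1, 2.7, 4.0, 3.1],
    "control":   [3.5, 2.8, 3.6, 3.0, 4.2, 2.9],
    "alliance":  [4.1, 3.0, 3.9, 2.8, 4.4, 3.2],
})

# OLS regression mirroring the abstract's finding that affective and
# goal-oriented factors, plus conversational control, predict alliance.
model = smf.ols("alliance ~ affective + goal + control", data=df).fit()
print(model.summary())
```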
Introspection is central to identity construction and future planning, yet most digital tools approach the self as a unified entity. In contrast, Dialogical Self Theory (DST) views the self as composed of multiple internal perspectives, such as values, concerns, and aspirations, that can come into tension or dialogue with one another. Building on this view, we designed InnerPond, a research probe in the form of a multi-agent system that represents these internal perspectives as distinct LLM-based agents for introspection. Its design was shaped through iterative explorations of spatial metaphors, interaction scaffolding, and conversational orchestration, culminating in a shared spatial environment for organizing and relating multiple inner perspectives. In a user study with 17 young adults navigating career choices, participants co-created inner voices with AI, composed relational inner landscapes, and orchestrated dialogue as observers and mediators, offering insight into how such systems could support introspection. Overall, this work offers design implications for AI-supported introspection tools that enable exploration of the self’s multiplicity.
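The abstract only sketches InnerPond's orchestration; the underlying multi-agent pattern can be illustrated minimally by giving each inner perspective its own system prompt and letting the user, as mediator, relay dialogue between agents. All names here are hypothetical, and `generate` stands in for any chat-completion function:

```python
from dataclasses import dataclass, field

@dataclass
class InnerVoice:
    """One internal perspective (e.g., a value, concern, or aspiration)
    represented as an LLM agent with its own framing prompt."""
    name: str
    stance: str
    history: list = field(default_factory=list)

    def respond(self, message: str, generate) -> str:
        system = (f"You are the user's inner voice '{self.name}'. "
                  f"Speak from this stance: {self.stance}. "
                  "Stay in character; be brief and honest.")
        msgs = ([{"role": "system", "content": system}]
                + self.history
                + [{"role": "user", "content": message}])
        reply = generate(msgs)
        self.history += [{"role": "user", "content": message},
                         {"role": "assistant", "content": reply}]
        return reply

# The user, acting as mediator, can relay one voice's reply to another:
# ambition = InnerVoice("Ambition", "craves growth and new challenges")
# caution = InnerVoice("Caution", "fears instability and regret")
# caution.respond(ambition.respond("Should I switch fields?", llm), llm)
```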