In this study, we introduce the Conversation Progress Guide (CPG), a system for text-based conversational AI interactions that provides a visual interface representing task progress.
Users often encounter failures when interacting with conversational AI, which can undermine their self-efficacy, an individual's belief in their own capabilities, and reduce their willingness to engage with these services.
The CPG offers visual feedback on task progress, providing users with mastery experiences, a key source of self-efficacy.
To evaluate the system's effectiveness, we conducted a user study assessing how the integration of the CPG influences user engagement and self-efficacy.
Results demonstrate that users interacting with a conversational AI enhanced by the CPG showed significant improvements in self-efficacy measures compared to those using a conventional conversational AI.
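The abstract does not describe the CPG's implementation; as a minimal sketch, assuming the guide decomposes a task into an ordered list of named steps (all identifiers below are hypothetical), the progress state behind such feedback could be maintained as simply as:

```python
from dataclasses import dataclass, field

@dataclass
class ConversationProgressGuide:
    """Tracks a user's progress through a predefined set of task steps."""
    steps: list[str]                       # ordered task steps shown to the user
    completed: set[int] = field(default_factory=set)

    def mark_done(self, index: int) -> None:
        """Record one step as completed (a mastery experience for the user)."""
        if 0 <= index < len(self.steps):
            self.completed.add(index)

    def render(self) -> str:
        """Return a textual progress bar plus the next pending step."""
        done = len(self.completed)
        bar = "#" * done + "-" * (len(self.steps) - done)
        pending = next((s for i, s in enumerate(self.steps)
                        if i not in self.completed), "all steps done")
        return f"[{bar}] {done}/{len(self.steps)} - next: {pending}"

guide = ConversationProgressGuide(
    ["collect account ID", "verify address", "describe issue", "confirm fix"])
guide.mark_done(0)
print(guide.render())   # [#---] 1/4 - next: verify address
```

A deployed CPG would presumably update this state from the dialogue manager and render it graphically rather than as text.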
Chatbots are increasingly used to provide social support for individuals with mental health challenges. However, a systematic analysis of the types and directionality of support within chatbot use remains lacking. This study establishes a framework for understanding reciprocal social support exchanges in human-chatbot relationships, focusing on the popular chatbot Replika. By analyzing 496 posts and 20,494 comments from the largest Replika community on Reddit, we identified 27 support subcategories, organized into five main types (functional, informational, emotional, esteem, and network) and two directions (chatbot-receiving and chatbot-giving). Our findings reveal significant yet controversial issues, such as subscription services and chatbot-displayed affection. Notably, "user teaching chatbot" emerged as a core aspect of the human-chatbot relationship, covering how users actively guide and refine the chatbot's learning or algorithm. This study constructs a novel social support framework for chatbot use, highlighting the potential for reciprocal support exchanges between users and chatbots.
Replying to formal emails is time-consuming and cognitively demanding, as it requires crafting polite phrasing and responding adequately to the sender's requests. Although systems built on Large Language Models (LLMs) have been designed to simplify the email-replying process, users still need to provide detailed prompts to obtain the expected output. We therefore propose and evaluate an LLM-powered question-and-answer (QA)-based approach in which users reply to emails by answering a set of simple, short questions generated from the incoming email. We developed a prototype system, ResQ, and conducted controlled and field experiments with 12 and 8 participants, respectively. Our results demonstrate that the QA-based approach improves the efficiency of replying to emails and reduces workload while maintaining email quality, compared to a conventional prompt-based approach that requires users to craft appropriate prompts to obtain email drafts. We discuss how the QA-based approach influences the email reply process and interpersonal relationship dynamics, as well as the opportunities and challenges of QA-based approaches in AI-mediated communication.
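ResQ's implementation details are not reproduced here; the following sketch (assuming only a generic llm(prompt) -> str completion helper, a hypothetical stand-in for whatever model the system calls) illustrates the two-stage QA flow the abstract describes: questions are generated from the incoming email, and the user's short answers ground the drafted reply.

```python
def generate_questions(incoming_email: str, llm) -> list[str]:
    """Derive short, simple questions covering the sender's requests."""
    prompt = ("List the short questions the recipient must answer to reply to "
              "this email, one per line:\n\n" + incoming_email)
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()]

def compose_reply(incoming_email: str, qa_pairs: list[tuple[str, str]], llm) -> str:
    """Draft a polite reply grounded only in the user's answers."""
    answers = "\n".join(f"Q: {q}\nA: {a}" for q, a in qa_pairs)
    prompt = ("Write a polite, complete reply to the email below, using only "
              f"the answers provided.\n\nEmail:\n{incoming_email}\n\n"
              f"Answers:\n{answers}\n\nReply:")
    return llm(prompt)
```

The user answers each generated question in a few words; pairing those answers with their questions replaces the detailed prompt that a conventional prompt-based interface would require the user to craft.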
Mobile emailing demands efficiency in diverse situations, which motivates the use of AI. However, generated text does not always reflect how people want to respond, confronting users with tradeoffs in AI involvement that current email UIs do not yet address. We address this with a new UI concept called Content-Driven Local Response (CDLR), inspired by microtasking, which allows users to insert responses into the email by selecting sentences; these selections additionally serve to guide the AI's suggestions. The concept supports combining AI for local suggestions with message-level improvements. Our user study (N=126) compared CDLR with manual typing and full reply generation. We found that CDLR supports flexible workflows with varying degrees of AI involvement while retaining the benefits of reduced typing and fewer errors. This work contributes a new approach to integrating AI capabilities: by redesigning the UI for workflows with and without AI, we can empower users to dynamically adjust AI involvement.
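The abstract leaves CDLR's pipeline open; a minimal sketch, again assuming a generic llm(prompt) -> str helper, of how a selected sentence plus the user's partial text could seed a local (rather than message-level) suggestion:

```python
def local_response_suggestion(selected_sentence: str, user_draft: str, llm) -> str:
    """Suggest a short reply fragment for one selected email sentence.

    Keeping the suggestion local to the selected sentence (microtask-style)
    lets the user's own draft steer how much the AI contributes.
    """
    prompt = ("Suggest a one-sentence reply to the following sentence from an "
              "email, continuing the user's draft if one is given.\n"
              f"Sentence: {selected_sentence}\n"
              f"User draft so far: {user_draft or '(empty)'}\n"
              "Suggestion:")
    return llm(prompt).strip()
```

A message-level improvement pass over the assembled reply could then be layered on top, matching the abstract's combination of local suggestions and whole-message refinement.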
Health self-examination, such as checking for changes to skin moles, is key to identifying potential negative changes to one's body. A major barrier to initiating a self-examination is a perceived lack of confidence or knowledge. In this study, we use a 2 × 2 between-subjects design to evaluate the effect of an AI conversational agent (CA) on participants' self-efficacy and trust. We manipulated both participants' perceived skill in self-examination (based on prior perceived Success vs. Failure) and the CA's verbal persuasions (Encouraging vs. Neutral), with participants asked to complete a series of skin self-assessment tasks. Our findings show that participants' self-efficacy increased when they were exposed to encouraging CA persuasion. Additionally, an encouraging CA significantly increased participants' trust scores for perceived benevolence compared to a neutral-sounding CA. Our results inform the design of CAs that support users' independent self-examination.
One of the long-standing aspirations in conversational AI is to allow agents to autonomously take initiative in conversations, i.e., to be proactive. This is especially challenging for multi-party conversations. Prior NLP research has focused mainly on predicting the next speaker from contexts such as the preceding conversation. In this paper, we demonstrate the limitations of such methods and rethink what it means for AI to be proactive in multi-party, human-AI conversations. We propose that, just like humans, rather than merely reacting to turn-taking cues, a proactive AI formulates its own inner thoughts during a conversation and seeks the right moment to contribute. Through a formative study with 24 participants and inspiration from linguistics and cognitive psychology, we introduce the Inner Thoughts framework. Our framework equips AI with a continuous, covert train of thought that runs in parallel to the overt communication process, enabling it to engage proactively by modeling its intrinsic motivation to express these thoughts. We instantiated this framework in two real-time systems: an AI playground web app and a chatbot. In a technical evaluation and user studies with human participants, our framework significantly surpassed existing baselines on anthropomorphism, coherence, intelligence, and turn-taking appropriateness.
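The framework's concrete mechanics are not reproduced here; as a rough sketch (think, score, and SPEAK_THRESHOLD are hypothetical stand-ins), the covert thought loop the abstract describes could look like:

```python
import heapq

SPEAK_THRESHOLD = 0.7   # hypothetical motivation cutoff for taking the floor

class InnerThoughtsAgent:
    """Forms covert thoughts every turn; speaks only when motivated enough."""

    def __init__(self, think, score):
        self.think = think        # fn(history) -> list of candidate thoughts
        self.score = score        # fn(thought, history) -> motivation in [0, 1]
        self.thoughts: list[tuple[float, str]] = []   # max-heap via negation

    def observe(self, history: list[str]) -> str | None:
        """Update the covert train of thought; optionally take the turn."""
        for thought in self.think(history):
            heapq.heappush(self.thoughts, (-self.score(thought, history), thought))
        if self.thoughts and -self.thoughts[0][0] >= SPEAK_THRESHOLD:
            return heapq.heappop(self.thoughts)[1]    # proactive contribution
        return None                                    # keep thinking silently
```

The key contrast with next-speaker prediction is that the decision to speak arises from the agent's own queued thoughts and their motivation scores, not solely from turn-taking cues in the preceding context.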
Recent advances in conversational AI and the ubiquity of related devices and applications (from robots to smart speakers to chatbots) have led to extensive research on designing and studying conversational systems with older adults. Despite a growing literature on this topic, many studies examine small groups of older adults and specific devices, neglecting a holistic understanding of how diverse groups of older adults perceive conversational interaction more broadly. We present a systematic review that synthesizes older adults' perceptions of the challenges and opportunities of interacting with these systems. We highlight their vision for future AI-based conversational systems, emphasizing a desire for more human-like interactions, personalization, and greater control over their information. We discuss the implications for future research and the design of conversational AI systems for older adults.