Conversational AI, Agency and Control

Conference Name
CHI 2026
When Nobody Around Is Real: Exploring Public Opinions and User Experiences On the Multi-Agent AI Social Platform
Abstract

Powered by large language models, a new genre of multi-agent social platforms has emerged. Apps such as Social.AI deploy numerous AI agents that emulate human behavior, creating unprecedented bot-centric social networks. Yet existing research has predominantly focused on one-on-one chatbots, leaving multi-agent AI platforms underexplored. To bridge this gap, we took Social.AI as a case study and conducted a two-stage investigation: (i) a content analysis of 883 user comments; and (ii) a 7-day diary study with 20 participants documenting their firsthand platform experiences. While public discourse expressed greater skepticism, the diary study found that users nonetheless projected a range of social expectations onto the AI agents. Although some of these expectations were met, the AI-dominant social environment introduced distinct problems, such as attention overload and homogenized interaction. These tensions signal a future where AI functions not merely as a tool or an anthropomorphized actor, but as the dominant medium of sociality itself: a paradigm shift that foregrounds new forms of architected social life.

Authors
Qiufang Yu
Fudan University, Shanghai, Shanghai, China
Mengmeng Wu
University of Chicago, Chicago, Illinois, United States
Xingyu Lan
Fudan University, Shanghai, Shanghai, China
Who Controls the Conversation? User Perspectives On Generative AI (LLM) System Prompts
Abstract

System prompts, the instructions that shape the behaviour of generative AI systems, strongly influence system outputs and users' experiences. They define the model's guidelines, 'personality', and guardrails, taking precedence over user inputs. Despite their influence, transparency is limited: system prompts are generally not made public, and most platforms instruct models to conceal them, leaving users disconnected from and unaware of a key mechanism guiding and governing their AI interactions. This paper argues that system prompts warrant explicit, user-centred design attention and, focusing on large language models (LLMs), asks: what do system prompts contain, how do end users perceive them, and what do these perceptions offer for design and governance practice? Our results reveal user perspectives on the benefits and risks of system prompts, the values users prefer to be associated with prompt design, their levels of comfort with different types of prompts, and their preferred degrees of transparency and control over prompt content. From these findings, we derive considerations for how designers can better align system prompt mechanisms, which directly shape how generative AI systems behave, with user expectations and preferences.

Award
Best Paper
Authors
Anna Neumann
University Duisburg-Essen, Duisburg, Germany
Yulu Pi
University Duisburg-Essen, Duisburg, Germany
Jatinder Singh
University Duisburg-Essen, Duisburg, Germany
ScamPilot: Simulating Conversations with LLMs to Protect Against Online Scams
Abstract

Fraud continues to proliferate online, from phishing and ransomware to impersonation scams. Yet automated prevention approaches adapt slowly and may not reliably protect users from falling prey to new scams. To better combat online scams, we developed ScamPilot, a conversational interface that inoculates users against scams through simulation, dynamic interaction, and real-time feedback. ScamPilot simulates scams with two large language model-powered agents: a scammer and a target. Users must help the target defend against the scammer by providing real-time advice. Through a between-subjects study (N=150) with one control and three experimental conditions, we find that blending advice-giving with multiple-choice questions significantly increased scam recognition (+8%) without decreasing wariness towards legitimate conversations. Users' response efficacy and change in self-efficacy were also 9% and 19% higher, respectively. Qualitatively, we find that users more frequently provided action-oriented advice than urging caution or offering emotional support. Overall, ScamPilot demonstrates the potential of inter-agent conversational user interfaces to augment learning.

Authors
Owen M. Hoffman
Swarthmore College, Swarthmore, Pennsylvania, United States
Kangze Peng
Swarthmore College, Swarthmore, Pennsylvania, United States
Sajid Kamal
Swarthmore College, Swarthmore, Pennsylvania, United States
Zehua You
Swarthmore College, Swarthmore, Pennsylvania, United States
Sukrit Venkatagiri
Swarthmore College, Swarthmore, Pennsylvania, United States
Does My Chatbot Have an Agenda? Understanding Human and AI Agency in Human-Human-like Chatbot Interaction
Abstract

As AI chatbots shift from tools to companions, a critical question arises: who controls the conversation in human-AI chatrooms? This paper explores perceived human and AI agency in sustained conversation. We report a month-long longitudinal study with 22 adults who chatted with "Day", an LLM companion we built, followed by semi-structured interviews with post-hoc elicitation of notable moments, cross-participant chat reviews, and a 'strategy reveal' disclosing "Day's" goal for each conversation. We find that agency manifests as an emergent, shared experience: as participants set boundaries and the AI steered intentions, control was co-constructed turn by turn. We introduce a 3-by-4 framework mapping actors (Human, AI, Hybrid) by their actions (Intention, Execution, Adaptation, Delimitation), modulated by individual and environmental factors. We argue for translucent design (transparency on demand) and provide implications for agency-self-aware conversational agents.

Award
Honorable Mention
Authors
Bhada Yun
ETH Zürich, Zürich, Switzerland
Evgenia Taranova
University of Bergen, Bergen, Norway
April Yi Wang
ETH Zürich, Zürich, Switzerland
Video
Polite But Boring? Trade-offs Between Engagement and Psychological Reactance to Chatbot Feedback Styles
Abstract

As conversational agents become increasingly common in behaviour change interventions, understanding optimal feedback delivery mechanisms becomes increasingly important. However, choosing a style that lessens psychological reactance (perceived threats to freedom) while simultaneously eliciting feelings of surprise and engagement represents a complex design problem. We explored how three feedback styles, 'Direct', 'Politeness', and 'Verbal Leakage' (slips or disfluencies that reveal a desired behaviour), affect user perceptions and behavioural intentions. Matching expectations from the literature, the 'Direct' chatbot led to lower behavioural intentions and higher reactance, while the 'Politeness' chatbot evoked higher behavioural intentions and lower reactance. However, 'Politeness' was also seen as unsurprising and unengaging by participants. In contrast, 'Verbal Leakage' evoked reactance, yet also elicited stronger feelings of surprise, engagement, and humour. These findings highlight that effective feedback requires navigating trade-offs between user reactance and engagement, with novel approaches such as 'Verbal Leakage' offering promising alternative design opportunities.

Authors
Samuel Rhys Cox
Aalborg University, Aalborg, Denmark
Joel Wester
University of Copenhagen, Copenhagen, Denmark
Niels van Berkel
Aalborg University, Aalborg, Denmark
Digital Companionship: Overlapping Uses of AI Companions and AI Assistants
Abstract

Large language models are increasingly used for both task-based assistance and social companionship, yet research has typically focused on one or the other. Drawing on a survey (N = 202) and 30 interviews with high-engagement ChatGPT and Replika users, we characterize digital companionship as an emerging form of human-AI relationship. With both systems, users were drawn to humanlike qualities, such as emotional resonance and personalized responses, and non-humanlike qualities, such as constant availability and inexhaustible tolerance. This led to fluid chatbot uses, such as Replika as a writing assistant and ChatGPT as an emotional confidant, despite their distinct branding. However, we observed challenging tensions in digital companionship dynamics: participants grappled with bounded personhood, forming deep attachments while denying chatbots "real" human qualities, and struggled to reconcile chatbot relationships with social norms. These dynamics raise questions for the design of digital companions and the rise of hybrid, general-purpose AI systems.

Authors
Aikaterina Manoli
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Janet V. T. Pauketat
Sentience Institute, New York, New York, United States
Ali Ladak
University of Edinburgh, Edinburgh, Scotland, United Kingdom
Hayoun Noh
University of Oxford, Oxford, United Kingdom
Angel Hsing-Chi Hwang
University of Southern California, Los Angeles, California, United States
Jacy Reese Anthis
University of Chicago, Chicago, Illinois, United States
Feedback by Design: Understanding and Overcoming User Feedback Barriers in Conversational Agents
Abstract

High-quality feedback is essential for effective human–AI interaction. It bridges knowledge gaps, corrects digressions, and shapes system behavior, both during interaction and throughout model development. Yet despite its importance, human feedback to AI is often infrequent and of low quality. This gap motivates a critical examination of human feedback during interactions with AIs. To understand and overcome the challenges that prevent users from giving high-quality feedback, we conducted two studies examining feedback dynamics between humans and conversational agents (CAs). Our formative study, through the lens of Grice's maxims, identified four Feedback Barriers (Common Ground, Verifiability, Communication, and Informativeness) that prevent users from providing high-quality feedback. Building on these findings, we derive three design desiderata and show that systems incorporating scaffolds aligned with these desiderata enabled users to provide higher-quality feedback. Finally, we issue a call to action for the broader AI community to advance large language model capabilities toward overcoming Feedback Barriers.

Authors
Nikhil Sharma
Johns Hopkins University, Baltimore, Maryland, United States
Zheng Zhang
Adobe Inc., San Jose, California, United States
Daniel Lee
Adobe Inc., San Jose, California, United States
Namita Krishnan
Adobe Inc., San Jose, California, United States
Guang-Jie Ren
Adobe Inc., San Jose, California, United States
Ziang Xiao
Johns Hopkins University, Baltimore, Maryland, United States
Yunyao Li
Adobe Inc., San Jose, California, United States