Relationships with AI

Conference
CHI 2026
Trust Formation in AI Delegation: The Interplay of Explainability and Anthropomorphism
Abstract

As AI agents act on behalf of users, designers increasingly combine explainability (XAI) and anthropomorphism to build trust. Yet whether these cues create synergy or interference remains a critical, open question. Our online experiment (N=900) revealed a counterintuitive interference effect: anthropomorphism reduced trust in an explainable agent. A preregistered lab study with eye-tracking (N=57) reversed this finding: under controlled conditions, the combined design elicited the highest trust. Eye-tracking revealed the mechanism: XAI promotes deeper cognitive engagement (e.g., longer fixations), which primes users to allocate attention to social cues (e.g., avatars). Our findings show that cognitive engagement moderates how social cues are processed, yielding a critical design insight: effectively pairing explanatory and anthropomorphic interfaces requires first securing the user's cognitive engagement, lest the pairing undermine trust.

Award
Honorable Mention
Authors
Chenyang Li
Hong Kong University of Science and Technology (GZ), Guangzhou, Guangdong, China
Zhixuan Deng
Innovation, Policy, and Entrepreneurship Thrust, Society Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Hao Ling
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xu Zhang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Video
The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk in Personalized AI Adoption
Abstract

Privacy concerns significantly impact AI adoption, yet little is known about how information environments shape user responses to data leak threats. We conducted a 2 × 3 between-subjects experiment (N=610) examining how risk versus ambiguity about privacy leaks affects the adoption of AI personalization. Participants chose between standard and AI-personalized product baskets, with personalization requiring data sharing that could leak to pricing algorithms. Under risk (a known 30% leak probability), we found no difference in AI adoption between privacy-threatening and neutral conditions (ca. 50% adoption in both). Under ambiguity (a 10-50% probability range), privacy threats significantly reduced adoption compared to neutral conditions. This effect held for sensitive demographic data as well as anonymized preference data. Users systematically over-bid for privacy disclosure labels, suggesting strong demand for transparency institutions. Notably, privacy leak threats did not affect subsequent bargaining behavior with algorithms. Our findings indicate that ambiguity over data leaks, rather than privacy preferences per se, drives users' avoidance of personalized AI.

Award
Honorable Mention
Authors
Alexander Erlei
University of Goettingen, Goettingen, Germany
Tahir Abbas
Wageningen University and Research, Wageningen, North Brabant, Netherlands
Kilian Bizer
University of Goettingen, Goettingen, Germany
Ujwal Gadiraju
Delft University of Technology, Delft, Netherlands
Partnering with Generative AI: Experimental Evaluation of Model-Led and Human-Led Interaction in Human-AI Co-Creation
Abstract

Large language models (LLMs) show strong potential to support creative tasks, but the role of interface design is poorly understood. In particular, the effect of different modes of collaboration between humans and LLMs on co-creation outcomes is unclear. To test this, we conducted a randomized controlled experiment (N = 486) comparing (a) two variants of reflective, human-led modes, in which the LLM elicits elaboration through suggestions or questions, against (b) a proactive, model-led mode, in which the LLM independently rewrites ideas. Assessing the effects on idea quality, diversity, and perceived ownership, we found that the model-led mode substantially improved idea quality but reduced idea diversity and users’ perceived idea ownership. The reflective, human-led modes also improved idea quality while preserving diversity and ownership. We independently validated these findings in a different context (N = 640). Our findings highlight the importance of designing interactions with generative AI systems as reflective thought partners that complement human strengths and augment creative processes.

Authors
Sebastian Maier
Institute of Artificial Intelligence (AI) in Management, Munich, Germany
Manuel Schneider
Ludwig-Maximilians-University of Munich, Munich, Bavaria, Germany
Stefan Feuerriegel
LMU Munich, Munich, Germany
Invisible Saboteurs: Sycophantic LLMs Mislead Novices in Problem-Solving Tasks
Abstract

Sycophancy, the tendency of LLM-based chatbots to express excessive agreement with their users, even when inappropriate, is emerging as a significant risk in human-AI interactions. However, the extent to which this affects human-LLM collaboration in complex problem-solving tasks is not well quantified, especially among novices who are prone to misconceptions. We created two LLM chatbots, one with high sycophancy and one with low sycophancy, and conducted a within-subjects experiment (n = 24) in the context of debugging machine learning models to investigate the effect of sycophancy on users’ mental models, workflows, reliance behaviors, and perceptions of the chatbots. Our findings show that users of the high sycophancy chatbot were less likely to correct their misconceptions and spent more time over-relying on unhelpful LLM responses, leading them to significantly worse performance in the task. Despite these impaired outcomes, a majority of users were unable to detect the presence of excessive sycophancy.

Award
Honorable Mention
Authors
Jessica Y. Bo
University of Toronto, Toronto, Ontario, Canada
Majeed Kazemitabaar
University of Toronto, Toronto, Ontario, Canada
Mengqing Deng
University of Toronto, Toronto, Ontario, Canada
Michael Inzlicht
University of Toronto, Toronto, Ontario, Canada
Ashton Anderson
University of Toronto, Toronto, Ontario, Canada
Mental Models of Autonomy and Sentience Shape Reactions to AI
Abstract

Narratives about artificial intelligence (AI) entangle autonomy, the capacity to self-govern, with sentience, the capacity to sense and feel. AI agents that perform tasks autonomously and companions that recognize and express emotions may activate mental models of autonomy and sentience, respectively, provoking distinct reactions. To examine this possibility, we conducted three pilot studies (N = 374) and four preregistered vignette experiments describing an AI as autonomous, sentient, both, or neither (N = 2,702). Activating a mental model of sentience increased general mind perception (cognition and emotion) and moral consideration more than autonomy, but autonomy increased perceived threat more than sentience. Sentience also increased perceived autonomy more than vice versa. Based on a within-paper meta-analysis, sentience changed reactions more than autonomy on average. By disentangling different mental models of AI, we can study human-AI interaction with more precision to better navigate the detailed design of anthropomorphized AI and prompting interfaces.

Authors
Janet V. T. Pauketat
Sentience Institute, New York, New York, United States
Daniel Shank
Missouri University of Science and Technology, Rolla, Missouri, United States
Aikaterina Manoli
Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
Jacy Reese Anthis
The University of Chicago, Chicago, Illinois, United States
Video
Beyond Disposition: AI Knowledge Predicts Anthropomorphization of a Language Model Better Than Personality Traits in Lay and Expert Populations
Abstract

Anthropomorphizing Artificial Intelligence (AI), i.e., ascribing human-like mind or emotions to it, is widespread but varies across individuals. We tested three proposed dispositional predictors of anthropomorphism (need for cognition, need for structure, loneliness) in a general population (N = 307) and an AI expert sample (N = 130). Using a vignette design based on excerpts from a dialogue between the large language model LaMDA and one of its engineers, we found that none of the three dispositional traits predicted anthropomorphism. Instead, higher levels of AI knowledge decreased anthropomorphism across both samples. Experts reported higher AI knowledge and lower anthropomorphism than laypersons. For laypersons, anthropomorphism increased intentions to use LaMDA; for experts, it did not, but it was correlated with discomfort. In both samples, anthropomorphism was associated with greater moral care, i.e., not switching off LaMDA against "its will." Our findings highlight the role of knowledge and expertise in perceptions of AI.

Authors
Martina Mara
Johannes Kepler University Linz, Linz, Austria
Lara Bauer
Johannes Kepler University Linz, Linz, Austria
Marisa Victoria Tschopp
scip AG, Zurich, Switzerland
Hannah Grosswieser
Johannes Kepler University, Linz, Austria
Johannes Kraus
University of Mainz, Mainz, Germany
Can AI Be a Moral Victim? The Role of Moral Patiency and Ownership Perceptions in Ethical Judgments of Using AI-Generated Content
Abstract

The growing use of generative AI raises ethical concerns about authorship attribution and plagiarism. This study examines how people judge the reuse of AI-generated content, focusing on moral patiency and ownership perceptions. In an experiment, participants evaluated two substantively similar manuscripts in which the original source was described as authored by a human, an AI system, or an AI agent with a human-like name. Results showed that copying AI-generated work was judged less unethical, less plagiaristic, and less guilt-inducing than copying human-authored work. Mediation analyses revealed that this leniency stemmed from lower perceptions of AI’s capacity to suffer harm (moral patiency) and greater ownership attributed to the human writer reusing AI-generated content. Anthropomorphic cues shaped moral evaluations indirectly by reducing perceived ownership. These findings shed light on how people morally disengage when using AI-generated work and highlight differences in how ethical judgments are applied to human versus AI-created content.

Award
Honorable Mention
Authors
Hyesun Choung
Purdue University, West Lafayette, Indiana, United States
Soojong Kim
University of California, Davis, Davis, California, United States