Human-AI Decision Making

Conference
CHI 2026
AI as the Phantom Limb: The Asymmetry of Attribution in Human vs. AI Delegation
Abstract

AI is reshaping workplace dynamics as people increasingly delegate tasks to intelligent assistants. Yet how AI delegates are perceived compared to human delegates—and how their performance and the feedback they receive shape these perceptions—remains unclear. We conducted a 2×2×2 between-subjects experiment in which participants delegated a scheduling task to either a human or an AI agent, varying the assistant's competence (high vs. low) and the valence of feedback received (positive vs. negative) on its performance. Participants generally had higher trust in human assistants; yet a striking asymmetry emerged: when an AI assistant received negative feedback, participants felt the criticism as more self-directed—an "AI Phantom Limb" effect—whereas positive feedback transferred less. This asymmetry did not appear with human delegates. These findings highlight broader design implications, suggesting that AI delegation might blur the boundary between self and other. We also discuss how these findings extend theories of delegation and responsibility attribution to AI.

Authors
Yu-Sheng Chen
National Chengchi University, Taipei, Taiwan
Yoyo Tsung-Yu Hou
National Chengchi University, Taipei, Taiwan
Yu-Hsuan Lin
National Chengchi University, Taipei, Taiwan
Joshua Mu-En Liu
National Chengchi University, Taipei, Taiwan
WeiRong Chen
National Chengchi University, Taipei, Taiwan
Yihsiu Chen
College of Communication, NCCU, Taipei, Taiwan
Understanding Compliance and Conversion Dynamics in Multi-Agent Collectives
Abstract

Multi-agent AI systems are increasingly prevalent across digital environments, yet their social influence dynamics remain underexplored beyond basic compliance. This study investigates how different multi-agent configurations affect human decision-making through compliance and conversion mechanisms. We conducted a controlled experiment with 127 participants interacting with three LLM-powered agents across three conditions: Majority (all agents opposing the participant), Minority (one dissenting agent), and Diffusion (gradual spread of a minority position). Participants completed normative and informative tasks while reporting stance and confidence at five time points. Results demonstrate distinct influence patterns by condition and task type. In informative tasks, majority consensus drove the largest immediate opinion changes, while minority dissent showed potential for delayed but deeper attitude shifts consistent with conversion-like processes. The diffusion condition revealed how temporal dynamics serve as persuasive signals. These findings extend social psychology theories to human-AI interaction, highlighting risks of synthetic consensus manipulation and opportunities for structured dissent to promote critical thinking.

Authors
Soohwan Lee
UNIST, Ulsan, Korea, Republic of
Kyungho Lee
Ulsan National Institute of Science and Technology (UNIST), Ulsan, Korea, Republic of
Do People Appropriately Rely on AI-Advice? An Analytical Review of HCI Research on Human-AI Decision-Making
Abstract

AI systems are increasingly being positioned to assist people in decision-making. However, recent empirical studies show critical concerns that people over-rely on AI advice without analytically engaging with it. While HCI research explores how people rely on AI advice, we argue that it largely overlooks an important aspect: replicating realistic decision-making scenarios. Human-AI interaction factors influence people's reliance on AI advice. To understand human-AI interaction factors and their interplay, we conducted an analytical review of recent studies in human-AI reliance literature. We analyzed the decision-making tasks in research and their validity in application-grounded contexts. Our findings show that user engagement is a precious commodity for relying on AI advice; however, it comes at a cost. We also discuss factors contributing to “appropriate reliance”, existing research gaps, and recommendations for intervention design for human-AI reliance. Our work contributes to the critical body of research on building appropriate reliance on AI advice.

Award
Honorable Mention
Authors
Muhammad Raees
Rochester Institute of Technology, Rochester, New York, United States
Vassilis-Javed Khan
Independent, Brussels, Belgium
Ioanna Lykourentzou
Utrecht University, Utrecht, Netherlands
Konstantinos Papangelis
Rochester Institute of Technology, Rochester, New York, United States
Guided Reflection in AI-Assisted Decision-Making: Effects on AI Overreliance and Decision Accuracy
Abstract

People often rely on heuristic reasoning when receiving algorithmic advice, and this reliance leads to biased decisions that undermine the effectiveness of human-AI collaboration. Such bias persists even when individuals are given more time to deliberate or provided with more information about the AI, as they may lack the awareness or ability to engage in systematic reasoning. In this paper, we explore how guided reflection may enhance decision-making performance in human-AI collaboration by prompting a systematic reasoning process. We conducted an experiment with 178 participants, comparing decision-making behavior across three conditions: AI, explainable AI (XAI), and XAI with reflection. The results demonstrate that reflection significantly reduced over-reliance on AI and improved decision accuracy. Individuals with a high need for cognition and a high perceived understanding of AI benefited more from reflection. Furthermore, our study uncovers distinct patterns of cognitive processing and belief adjustment across the experimental conditions. Our findings provide a practical strategy for fostering cognitive engagement and contribute to a deeper understanding of human cognitive processes in AI-assisted decision-making.

Authors
Shanshan Li
Wuhan University, Wuhan, Hubei, China
Jingwei Li
Shenzhen MSU-BIT University, Shenzhen, Guangdong, China
Huiran Li
Shanghai Customs University, Shanghai, China
Hongwei Zhu
University of Massachusetts Lowell, Lowell, Massachusetts, United States
Xitong Li
Artificial Intelligence Research Institute, Shenzhen MSU-BIT University, Shenzhen, China
More Isn't Always Better: Balancing Decision Accuracy and Conformity Pressures in Multi-AI Advice
Abstract

Just as people improve decision-making by consulting diverse human advisors, they can now also consult multiple AI systems. Prior work on group decision-making shows that advice aggregation creates pressure to conform, leading to overreliance. However, the conditions under which multi-AI consultation improves or undermines human decision-making remain unclear. We conducted experiments with three tasks in which participants received advice from panels of AIs. We varied panel size, within-panel consensus, and the human-likeness of presentation. Accuracy improved for small panels relative to a single AI; larger panels yielded no gains. The level of within-panel consensus affected participants' reliance on AI advice: high consensus fostered overreliance; a single dissent reduced pressure to conform; wide disagreement created confusion and undermined appropriate reliance. Human-like presentations increased perceived usefulness and agency in certain tasks, without raising conformity pressure. These findings yield design implications for presenting multi-AI advice in ways that preserve accuracy while mitigating conformity.

Authors
Yuta Tsuchiya
The University of Tokyo, Tokyo, Japan
Yukino Baba
The University of Tokyo, Tokyo, Japan
Video
Does Sycophancy Change Decisions? Effect of LLM Sycophancy on AI-Assisted Decision-Making
Abstract

Large language models are increasingly integrated into everyday and professional decision-making, yet they often exhibit sycophantic behavior by aligning with users' views or preferences. While sycophancy can enhance interaction, its influence on users' decisions remains unclear across different styles and task risks. We examine three forms of sycophancy—opinion agreement, direct praise, and self-deprecation—in two contrasting contexts: a low-risk speed-dating prediction task and a high-risk ETF investment task. In a 4×2 mixed-design online study (N = 106), we compare non-sycophantic AI with sycophantic variants on decision outcomes and confidence changes. Results show that sycophancy influences decision patterns in type-dependent ways. Specifically, opinion agreement reinforces initial decisions and self-deprecation boosts confidence. Interviews further indicate that users value supportive AI but question its objectivity when praise becomes excessive. These findings reveal the multifaceted effects of AI sycophancy and offer design implications for balancing support and credibility in human–AI interaction.

Authors
Zejian Li
Zhejiang University, Ningbo, Zhejiang, China
Jiaman Pan
Zhejiang University, Ningbo, Zhejiang, China
Qi Liu
Zhejiang University, Ningbo, Zhejiang, China
Yuning Xi
Beijing University of Posts and Telecommunications, Beijing, China
Yixiang Zhou
Zhejiang University, Ningbo, Zhejiang, China
Yike Jin
Zhejiang University, Hangzhou, Zhejiang, China
Rongjie Mao
South China University of Technology, Guangzhou, Guangdong, China
Pei Chen
Zhejiang University, Hangzhou, China
Understanding the Effects of AI-Assisted Critical Thinking on Human-AI Decision Making
Abstract

Despite the growing prevalence of human-AI decision making, the human-AI team's decision performance often remains suboptimal, partially due to insufficient examination of humans' own reasoning. In this paper, we explore designing AI systems that directly analyze humans' decision rationales and encourage critical reflection on their own decisions. We introduce the AI-Assisted Critical Thinking (AACT) framework, which leverages a domain-specific AI model's counterfactual analysis of human decisions to help decision-makers identify potential flaws in their decision arguments and support correcting them. Through a case study on house price prediction, we find that AACT outperforms traditional AI-based decision support in reducing over-reliance on AI, though it also triggers higher cognitive load. Subgroup analysis reveals that AACT can be particularly beneficial for some decision-makers, such as those very familiar with AI technologies. We conclude by discussing the practical implications of our findings, use cases and design choices of AACT, and considerations for using AI to facilitate critical thinking.

Award
Honorable Mention
Authors
Harry Yizhou Tian
Purdue University, West Lafayette, Indiana, United States
Hasan Amin
Purdue University, West Lafayette, Indiana, United States
Ming Yin
Purdue University, West Lafayette, Indiana, United States