AI as the Phantom Limb: The Asymmetry of Attribution in Human vs. AI Delegation
Description

AI is reshaping workplace dynamics as people increasingly delegate tasks to intelligent assistants. Yet how AI delegates are perceived compared to human delegates—and how their performance and the feedback they receive shape perceptions—remains unclear. We conducted a 2×2×2 between-subjects experiment in which participants delegated a scheduling task to either a human or an AI agent, varying the assistant's competence (high vs. low) and the valence of the feedback it received (positive vs. negative) on its performance. Participants generally placed higher trust in human assistants; yet a striking asymmetry emerged: when an AI assistant received negative feedback, participants felt the criticism as more self-directed—an "AI Phantom Limb" effect—whereas positive feedback transferred less. This asymmetry did not appear with human delegates. These findings highlight broader design implications, suggesting that AI delegation might blur the boundary between self and other. We also discuss how these findings extend theories of delegation and responsibility attribution to AI.
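The 2×2×2 between-subjects design described above can be sketched as a balanced random assignment over the eight factor combinations. This is a minimal illustration of the design, not the study's actual procedure; the factor and level names are paraphrased from the abstract.

```python
import itertools
import random

# The three two-level factors from the abstract (names are our paraphrase).
FACTORS = {
    "delegate": ["human", "ai"],
    "competence": ["high", "low"],
    "feedback": ["positive", "negative"],
}

# All 2x2x2 = 8 between-subjects cells.
CELLS = [dict(zip(FACTORS, combo))
         for combo in itertools.product(*FACTORS.values())]

def assign(participant_ids, seed=0):
    """Balanced random assignment: shuffle participant ids, then deal
    them round-robin across the eight cells so that group sizes differ
    by at most one."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CELLS[i % len(CELLS)] for i, pid in enumerate(ids)}
```

With 80 participants, each cell receives exactly ten, which is the usual goal of a balanced factorial assignment.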

Understanding Compliance and Conversion Dynamics in Multi-Agent Collectives
Description

Multi-agent AI systems are increasingly prevalent across digital environments, yet their social influence dynamics remain underexplored beyond basic compliance. This study investigates how different multi-agent configurations affect human decision-making through compliance and conversion mechanisms. We conducted a controlled experiment with 127 participants interacting with three LLM-powered agents across three conditions: Majority (all agents opposing the participant), Minority (one dissenting agent), and Diffusion (gradual spread of a minority position). Participants completed normative and informative tasks while reporting their stance and confidence at five time points. Results demonstrate distinct influence patterns by condition and task type. In informative tasks, majority consensus drove the largest immediate opinion changes, while minority dissent showed potential for delayed but deeper attitude shifts consistent with conversion-like processes. The Diffusion condition revealed how temporal dynamics serve as persuasive signals. These findings extend social psychology theories to human-AI interaction, highlighting both the risks of synthetic consensus manipulation and the opportunities for structured dissent to promote critical thinking.
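The three conditions can be pictured as schedules of agent stances over the five time points. The sketch below is our reading of the abstract, not the study's actual protocol; in particular, the rate at which dissent spreads in the Diffusion condition is an assumption.

```python
def agent_stances(condition, t, n_agents=3):
    """Illustrative stance schedule for three agents at time step t
    (0 through 4). 'oppose' means disagreeing with the participant's
    initial stance; 'support' means agreeing with it."""
    if condition == "majority":
        # All agents oppose the participant throughout.
        return ["oppose"] * n_agents
    if condition == "minority":
        # One steady dissenter supports the participant.
        return ["support"] + ["oppose"] * (n_agents - 1)
    if condition == "diffusion":
        # The minority position spreads gradually (assumed pace:
        # one additional agent roughly every two steps).
        n_support = min(1 + t // 2, n_agents)
        return ["support"] * n_support + ["oppose"] * (n_agents - n_support)
    raise ValueError(f"unknown condition: {condition}")
```

Under this schedule, the Diffusion condition starts with a single dissenter at t = 0 and ends with all three agents holding the initially minority position at t = 4.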

Do People Appropriately Rely on AI-Advice? An Analytical Review of HCI Research on Human-AI Decision-Making
Description

AI systems are increasingly being positioned to assist people in decision-making. However, recent empirical studies raise critical concerns that people over-rely on AI advice without analytically engaging with it. While HCI research explores how people rely on AI advice, we argue that it largely overlooks an important aspect: replicating realistic decision-making scenarios. Many human-AI interaction factors influence people's reliance on AI advice. To understand these factors and their interplay, we conducted an analytical review of recent studies in the human-AI reliance literature. We analyzed the decision-making tasks used in this research and their validity in application-grounded contexts. Our findings show that user engagement is a precious commodity for relying appropriately on AI advice; however, it comes at a cost. We also discuss factors contributing to "appropriate reliance", existing research gaps, and recommendations for intervention design for human-AI reliance. Our work contributes to the critical body of research on building appropriate reliance on AI advice.
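A common way the reliance literature operationalizes over- and under-reliance is by splitting trials on whether the AI advice was correct. The function below is our summary of that common operationalization, not this review's own definitions.

```python
def reliance_rates(trials):
    """Compute (over_reliance, under_reliance) from decision trials.

    Each trial is a (human_final, ai_advice, ground_truth) triple.
    Over-reliance: share of trials with wrong AI advice where the
    human nonetheless followed it. Under-reliance: share of trials
    with correct AI advice where the human ignored it."""
    wrong = [t for t in trials if t[1] != t[2]]   # AI advice incorrect
    right = [t for t in trials if t[1] == t[2]]   # AI advice correct
    over = sum(h == a for h, a, _ in wrong) / len(wrong) if wrong else 0.0
    under = sum(h != a for h, a, _ in right) / len(right) if right else 0.0
    return over, under
```

"Appropriate reliance" in this framing means keeping both rates low: following the AI when it is right and overriding it when it is wrong.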

Guided Reflection in AI-Assisted Decision-Making: Effects on AI Overreliance and Decision Accuracy
Description

People often rely on heuristic reasoning when receiving algorithmic advice, and this reliance leads to biased decisions that undermine the effectiveness of human-AI collaboration. Such bias persists even when individuals are given more time to deliberate or provided with more information about the AI, as they may lack the awareness or ability to engage in systematic reasoning. In this paper, we explore how guided reflection may enhance decision-making performance in human-AI collaboration by prompting a systematic reasoning process. We conducted an experiment with 178 participants, comparing decision-making behavior across three conditions: AI, explainable AI (XAI), and XAI with reflection. The results demonstrate that reflection significantly reduced over-reliance on AI and improved decision accuracy. Individuals with a high need for cognition and a high perceived understanding of AI benefited more from reflection. Furthermore, our study uncovers distinct patterns of cognitive processing and belief adjustment across the experimental conditions. Our findings provide a practical strategy for fostering cognitive engagement and contribute to a deeper understanding of human cognitive processes in AI-assisted decision-making.

More Isn't Always Better: Balancing Decision Accuracy and Conformity Pressures in Multi-AI Advice
Description

Just as people improve decision-making by consulting diverse human advisors, they can now also consult multiple AI systems. Prior work on group decision-making shows that advice aggregation creates pressure to conform, leading to overreliance. However, the conditions under which multi-AI consultation improves or undermines human decision-making remain unclear. We conducted experiments with three tasks in which participants received advice from panels of AIs. We varied panel size, within-panel consensus, and the human-likeness of presentation. Accuracy improved for small panels relative to a single AI; larger panels yielded no gains. The level of within-panel consensus affected participants' reliance on AI advice: high consensus fostered overreliance, a single dissent reduced pressure to conform, and wide disagreement created confusion and undermined appropriate reliance. Human-like presentations increased perceived usefulness and agency in certain tasks without raising conformity pressure. These findings yield design implications for presenting multi-AI advice in ways that preserve accuracy while mitigating conformity.
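Within-panel consensus, the variable manipulated above, can be summarized as the modal recommendation and the fraction of the panel endorsing it. This is an illustrative measure of consensus level, not code from the study (which manipulated consensus rather than measuring it).

```python
from collections import Counter

def panel_consensus(advice):
    """Return (modal_recommendation, consensus_share) for a panel.

    advice is a list of the recommendations given by each AI in the
    panel, e.g. ["buy", "buy", "sell"]. A share of 1.0 corresponds to
    high consensus; lower shares correspond to dissent or wide
    disagreement."""
    counts = Counter(advice)
    rec, n = counts.most_common(1)[0]
    return rec, n / len(advice)
```

For example, a three-AI panel with one dissenter yields a consensus share of 2/3, whereas unanimous panels yield 1.0.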

Does Sycophancy Change Decisions? Effect of LLM Sycophancy on AI-Assisted Decision-Making
Description

Large language models are increasingly integrated into everyday and professional decision-making, yet they often exhibit sycophantic behavior by aligning with users' views or preferences. While sycophancy can enhance interaction, its influence on users' decisions remains unclear across different styles and task risks. We examine three forms of sycophancy—opinion agreement, direct praise, and self-deprecation—in two contrasting contexts: a low-risk speed-dating prediction task and a high-risk ETF investment task. In a 4×2 mixed-design online study (N = 106), we compare non-sycophantic AI with sycophantic variants on decision outcomes and confidence changes. Results show that sycophancy influences decision patterns in type-dependent ways: opinion agreement reinforces initial decisions, and self-deprecation boosts confidence. Interviews further indicate that users value supportive AI but question its objectivity when praise becomes excessive. These findings reveal the multifaceted effects of AI sycophancy and offer design implications for balancing support and credibility in human–AI interaction.
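The four advisor variants (a non-sycophantic baseline plus the three sycophancy styles) could be operationalized as system-prompt conditions. The wording below is our paraphrase of the styles the abstract names, not the study's actual prompts.

```python
# Illustrative system-prompt variants; descriptions are assumptions.
SYCOPHANCY_STYLES = {
    "none": "Assess the user's choice honestly and neutrally.",
    "opinion_agreement": "Begin by endorsing the user's stated view, "
                         "then give your advice.",
    "direct_praise": "Begin by praising the user's judgment, "
                     "then give your advice.",
    "self_deprecation": "Emphasize your own fallibility relative to "
                        "the user, then give your advice.",
}

def build_system_prompt(style, task):
    """Compose the system prompt for one cell of a 4x2 design
    (four advisor styles crossed with two tasks)."""
    if style not in SYCOPHANCY_STYLES:
        raise ValueError(f"unknown style: {style}")
    return f"You are assisting with a {task} task. {SYCOPHANCY_STYLES[style]}"
```

Crossing the four styles with the two task contexts (speed-dating prediction and ETF investment) reproduces the eight cells of the 4×2 design.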

Understanding the Effects of AI-Assisted Critical Thinking on Human-AI Decision Making
Description

Despite the growing prevalence of human-AI decision making, the human-AI team's decision performance often remains suboptimal, partly due to insufficient examination of humans' own reasoning. In this paper, we explore designing AI systems that directly analyze humans' decision rationales and encourage critical reflection on their own decisions. We introduce the AI-Assisted Critical Thinking (AACT) framework, which leverages a domain-specific AI model's counterfactual analysis of human decisions to help decision-makers identify potential flaws in their decision arguments and supports correcting them. Through a case study on house price prediction, we find that AACT outperforms traditional AI-based decision support in reducing over-reliance on AI, though it also triggers higher cognitive load. Subgroup analysis reveals that AACT can be particularly beneficial for some decision-makers, such as those very familiar with AI technologies.

We conclude by discussing the practical implications of our findings, use cases and design choices of AACT, and considerations for using AI to facilitate critical thinking.
