Trust Formation in AI Delegation: The Interplay of Explainability and Anthropomorphism

Abstract

As AI agents act on behalf of users, designers increasingly combine explainability (XAI) and anthropomorphism to build trust. Yet whether these cues create synergy or interference remains a critical, open question. Our online experiment (N=900) revealed a counterintuitive interference effect: anthropomorphism reduced trust in an explainable agent. A preregistered lab study with eye-tracking (N=57) reversed this finding: under controlled conditions, the combined design elicited the highest trust. Eye-tracking revealed the mechanism: XAI promotes deeper cognitive engagement (e.g., longer fixations), which primes users to allocate attention to social cues (e.g., avatars). Our findings show that trust depends on cognitive engagement moderating social cue processing, yielding a critical design insight: effectively pairing explanatory and anthropomorphic interfaces requires first securing the user's cognitive engagement to avoid undermining trust.

Award
Honorable Mention
Authors
Chenyang Li
Hong Kong University of Science and Technology (GZ), Guangzhou, Guangdong, China
Zhixuan Deng
Innovation, Policy, and Entrepreneurship Thrust, Society Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Hao Ling
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xu Zhang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Relationships with AI

P1 - Room 130
7 presentations
2026-04-13, 20:15–21:45