From Expectation to Evaluation: Expectation Cues Systematically Bias LLM and Human Judgment

Abstract

Expectation cues such as source labels, expertise signals, or identity-based indicators can bias how humans interpret and evaluate information. In high-stakes domains like healthcare, education, and law, such biases threaten the objectivity of decision-making. As LLMs increasingly provide decision support in these contexts, this study examines whether LLMs exhibit expectation-driven bias akin to that of humans. Across two experiments (N = 1260), we manipulated expectations via priming statements and measured shifts in judgment scores. In both humans and LLMs, higher expectations led to more favorable evaluations of suggestions of equivalent quality, and greater mismatches between expectations and actual performance produced stronger judgment distortions. Notably, humans tended to adjust their evaluations unconsciously, whereas LLMs revised their outputs in a consistent and traceable manner. These findings reveal both shared sensitivities and distinct adjustment patterns, offering design insights for building expectation-aware AI systems that promote fair and transparent human–AI interaction.

Authors
Sun Yiteng
The Hong Kong Polytechnic University, Hong Kong, China
Danica Dillion
University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States
Kurt Gray
The Ohio State University, Columbus, Ohio, United States
Mengtao Lyu
The Hong Kong Polytechnic University, Hong Kong, China
Zhuorui Zhang
The Hong Kong Polytechnic University, Hong Kong, China
Fan Li
The Hong Kong Polytechnic University, Hong Kong, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Trust and Perception in AI Systems

P1 - Room 118
7 presentations
2026-04-14, 20:15–21:45