Expectation cues such as source labels, expertise signals, or identity-based indicators can bias how humans interpret and evaluate information. In high-stakes domains such as healthcare, education, and law, these biases threaten the objectivity of decision-making. As LLMs increasingly provide decision support in such contexts, this study examines whether LLMs exhibit expectation-driven bias akin to that observed in humans. Across two experiments (N = 1,260), we manipulated expectations via priming statements and measured the resulting shifts in judgment scores. In both humans and LLMs, higher expectations led to more favorable evaluations of suggestions of equivalent quality, and larger mismatches between expectations and actual performance produced stronger judgment distortions. Notably, humans tended to adjust their evaluations unconsciously, whereas LLMs revised their outputs in a consistent and traceable manner. These findings reveal both shared sensitivities and distinct adjustment patterns, offering design insights for building expectation-aware AI systems that promote fair and transparent human–AI interaction.