Emulating Aggregate Human Choice Behavior and Biases with GPT Conversational Agents

Abstract

Cognitive biases often shape human decisions. While large language models (LLMs) have been shown to reproduce well-known biases, a more critical question is whether LLMs can predict biases at the individual level and emulate the dynamics of biased human behavior when contextual factors, such as cognitive load, interact with these biases. We adapted three well-established decision scenarios into a conversational setting and conducted a human experiment (N=1100). Participants engaged with a chatbot that facilitated decision-making through simple or complex dialogues. Results revealed robust biases. To evaluate how LLMs emulate human decision-making under similar interactive conditions, we used participant demographics and dialogue transcripts to simulate these conditions with LLMs based on GPT-4 and GPT-5. The LLMs reproduced human biases with high precision, and we found notable differences between models in how closely they aligned with human behavior. These findings have important implications for designing and evaluating adaptive, bias-aware LLM-based AI systems in interactive contexts.

Award
Honorable Mention
Authors
Stephen Pilli
University College Dublin, Dublin, Ireland
Vivek Nallur
University College Dublin, Dublin, Ireland

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Collaborating with AI

Auditorium
7 presentations
2026-04-14, 18:00–19:30