CoBRA: Programming Cognitive Bias in Social Agents Using Classic Social Science Experiments

Abstract

This paper introduces CoBRA, a novel toolkit for systematically specifying agent behavior in LLM-based social simulation. We found that conventional approaches, which specify agent behavior through implicit natural-language descriptions, often fail to yield consistent behavior across models, and the resulting behavior does not capture the nuances of the descriptions. In contrast, CoBRA introduces a model-agnostic way to control agent behavior that lets researchers explicitly specify desired nuances and obtain consistent behavior across models. At the heart of CoBRA is a novel closed-loop system primitive with two components: (1) a Cognitive Bias Index, which measures the demonstrated cognitive bias of a social agent by quantifying the agent's reactions in a set of validated classic social science experiments; and (2) a Behavioral Regulation Engine, which aligns the agent's behavior to exhibit controlled cognitive bias. Through CoBRA, we show how to operationalize validated social science knowledge (i.e., classic experiments) as reusable "gym" environments for AI—an approach that may generalize to richer social and affective simulations beyond bias alone.

Award
Best Paper
Authors
Xuan Liu
University of California San Diego, La Jolla, California, United States
HaoYang Shang
Shanghai Jiao Tong University, Shanghai, China
Haojian Jin
University of California San Diego, La Jolla, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: AI Systems for Human Goals

P1 - Room 122
7 presentations
2026-04-14, 18:00–19:30