Underspecified Human Decision Experiments Considered Harmful

Abstract

Decision-making with information displays is a key focus of research in areas like human-AI collaboration and data visualization. However, what constitutes a decision problem, and what is required for an experiment to conclude that decisions are flawed, remain imprecise. We present a widely applicable definition of a decision problem synthesized from statistical decision theory and information economics. We claim that to attribute loss in human performance to bias, an experiment must provide the information that a rational agent would need to identify the normative decision. We evaluate whether recent empirical research on AI-assisted decisions achieves this standard. We find that only 10 (26%) of 39 studies that claim to identify biased behavior presented participants with sufficient information to make this claim in at least one treatment condition. We motivate the value of studying well-defined decision problems by describing the characterization of performance losses that they make possible.
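For concreteness, the standard statistical decision-theoretic setup from which the abstract's definition is synthesized can be sketched as follows. This is a textbook formulation offered only for illustration; the paper's own definition may differ in detail, and the symbols below are illustrative rather than taken from the paper.

\[
\theta \in \Theta \ \text{(unknown state of the world)}, \qquad
a \in A \ \text{(available actions)}, \qquad
S(a, \theta) \ \text{(scoring rule / payoff)},
\]
\[
v \sim \pi(\cdot \mid \theta) \ \text{(signal or information display shown to the decision-maker)},
\]
\[
a^{*}(v) \;=\; \arg\max_{a \in A}\; \mathbb{E}_{\theta \sim p(\theta \mid v)}\!\left[ S(a, \theta) \right].
\]

Under this formulation, the normative decision is the expected-score-maximizing action a*(v); an experiment can attribute a participant's performance loss to bias only if it provides the ingredients a rational agent would need to compute a*(v), namely the prior over states, the signal-generating process, and the scoring rule.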

Authors
Jessica Hullman
Northwestern University, Evanston, Illinois, United States
Alex Kale
University of Chicago, Chicago, Illinois, United States
Jason Hartline
Northwestern University, Evanston, Illinois, United States
DOI

10.1145/3706598.3714063

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714063

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Decision Making and Analysis

G414+G415
6 presentations
2025-04-30 20:10:00 – 21:40:00