How Visualizing Inferential Uncertainty Can Mislead Readers About Treatment Effects in Scientific Results

Abstract

When presenting visualizations of experimental results, scientists often choose to display either inferential uncertainty (e.g., uncertainty in the estimate of a population mean) or outcome uncertainty (e.g., variation of outcomes around that mean) about their estimates. How does this choice impact readers' beliefs about the size of treatment effects? We investigate this question in two experiments comparing 95% confidence intervals (means and standard errors) to 95% prediction intervals (means and standard deviations). The first experiment finds that participants are willing to pay more for and overestimate the effect of a treatment when shown confidence intervals relative to prediction intervals. The second experiment evaluates how alternative visualizations compare to standard visualizations for different effect sizes. We find that axis rescaling reduces error, but not as well as prediction intervals or animated hypothetical outcome plots (HOPs), and that depicting inferential uncertainty causes participants to underestimate variability in individual outcomes.
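The distinction the paper draws between inferential and outcome uncertainty can be made concrete with a short sketch. Below, a 95% confidence interval (mean ± 1.96 × standard error) captures uncertainty about the population mean, while a 95% prediction interval (mean ± 1.96 × standard deviation) captures variation in individual outcomes; the sample data are purely illustrative and not from the paper.

```python
import math
import random

random.seed(0)
# Hypothetical treatment-outcome sample (illustrative only)
outcomes = [random.gauss(10, 4) for _ in range(100)]

n = len(outcomes)
mean = sum(outcomes) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in outcomes) / (n - 1))
se = sd / math.sqrt(n)  # standard error of the mean

# 95% confidence interval: inferential uncertainty about the mean
ci = (mean - 1.96 * se, mean + 1.96 * se)
# 95% prediction interval: outcome uncertainty for an individual
pi = (mean - 1.96 * sd, mean + 1.96 * sd)

print("95% CI:", ci)
print("95% PI:", pi)
```

Because the standard error shrinks with sample size (se = sd / √n), the confidence interval here is √n = 10 times narrower than the prediction interval, which illustrates why readers shown only confidence intervals can underestimate variability in individual outcomes.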

Award
Honorable Mention
Keywords
Uncertainty visualization
effect sizes
judgment and decision making
confidence intervals
prediction intervals
Authors
Jake M. Hofman
Microsoft Research, New York, NY, USA
Daniel G. Goldstein
Microsoft Research, New York, NY, USA
Jessica Hullman
Northwestern University, Evanston, IL, USA
DOI

10.1145/3313831.3376454

Paper URL

https://doi.org/10.1145/3313831.3376454

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Seeing (might be) believing

Paper session
Room: 316A MAUI
5 presentations
2020-04-29, 20:00–21:15