Visualization supports exploratory data analysis (EDA), but charts produced during EDA frequently exhibit spurious patterns that can mislead people into drawing unwarranted conclusions. We investigate interventions to prevent false discovery from visualized data, evaluating whether eliciting analyst beliefs guards against the over-interpretation of noisy visualizations. In two experiments, we exposed participants to both spurious and 'true' scatterplots and assessed their ability to infer the data-generating models underlying those samples. Participants who underwent prior belief elicitation made 21% more correct inferences and 12% fewer false discoveries. This benefit held across a variety of sample characteristics, suggesting broad utility for the intervention. However, additional interventions that highlighted counterevidence and sample uncertainty did not provide a significant further advantage. Our findings suggest that lightweight, belief-driven interactions can yield a reliable, if moderate, reduction in false discovery, and they point to future directions for improving visual inference and reducing bias. The data and materials for this paper are available at https://osf.io/52u6v/
DOI: https://doi.org/10.1145/3544548.3580808
Published at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2023, https://chi2023.acm.org/)