Trust in high-profile election forecasts influences the public’s confidence in democratic processes and electoral integrity. Yet maintaining trust after unexpected outcomes, such as the 2016 U.S. presidential election, is a significant challenge. Our work confronts this challenge through three experiments that gauge trust in election forecasts. We generate simulated U.S. presidential election forecasts, vary win probabilities and outcomes, and present them to participants in a professional-looking website interface. Within this interface, we explore (1) four different uncertainty displays, (2) a technique for subjective probability correction, and (3) visual calibration that depicts an outcome alongside its forecast distribution. Our quantitative results suggest that text summaries and quantile dotplots engender the highest trust over time, with observable partisan differences; the probability correction and the calibration show small-to-null effects on average. Complemented by our qualitative results, we provide design recommendations for conveying U.S. presidential election forecasts and discuss long-term trust in uncertainty communication. We provide our preregistration, code, data, model files, and videos at https://doi.org/10.17605/OSF.IO/923E7.