Simple changes to content curation algorithms affect the beliefs people form in a collaborative filtering experiment

Abstract

Content-curating algorithms provide a crucial service for social media users by surfacing relevant content, but they can also bring about harms when their objectives are misaligned with user values and welfare. Yet, few controlled experiments on the potential behavioral and cognitive consequences of this alignment problem exist. In a preregistered, two-wave, collaborative filtering experiment (total N=1,500), we demonstrate that simple changes to how posts are sampled and ranked can affect the beliefs people form. Our results show observable differences in two types of outcomes within statistically constructed groups: belief accuracy and consensus. We find partial support for hypotheses that the recently proposed approaches of "bridging-based ranking" and "intelligence-based ranking" promote consensus and belief accuracy, respectively. We also find that while personalized, engagement-based ranking promotes posts that participants perceive favorably, it simultaneously leads those participants to form more polarized and less accurate beliefs than any of the other algorithms considered.

Award
Honorable Mention
Authors
Jason W. Burton
University of Copenhagen, Copenhagen, Denmark
Stefan M. Herzog
Max Planck Institute for Human Development, Berlin, Germany
Philipp Lorenz-Spreen
Dresden University of Technology, Dresden, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Quantifying the Algorithmic Lens

P1 - Room 131
7 presentations
2026-04-13, 20:15–21:45