Content-curating algorithms provide a crucial service for social media users by surfacing relevant content, but they can also cause harm when their objectives are misaligned with user values and welfare. Yet few controlled experiments exist on the potential behavioral and cognitive consequences of this alignment problem. In a preregistered, two-wave collaborative filtering experiment (total N = 1,500), we demonstrate that simple changes to how posts are sampled and ranked can affect the beliefs people form. Our results show observable differences within statistically constructed groups in two types of outcomes: belief accuracy and consensus. We find partial support for the hypotheses that the recently proposed approaches of "bridging-based ranking" and "intelligence-based ranking" promote consensus and belief accuracy, respectively. We also find that while personalized, engagement-based ranking promotes posts that participants perceive favorably, it simultaneously leads those participants to form more polarized and less accurate beliefs than any of the other algorithms considered.
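The abstract names several ranking approaches without defining them. As a rough, illustrative sketch only (not the paper's implementation, and with all names and scoring rules assumed for illustration), the Python snippet below contrasts a simplified engagement-based ranker with a simplified bridging-based ranker that favors posts approved across opposing groups.

```python
# Illustrative sketch only -- NOT the paper's algorithms. It contrasts two of the
# ranking ideas named in the abstract under simplified, assumed definitions:
#   * engagement-based ranking: order posts by predicted engagement
#   * bridging-based ranking:   favor posts rated positively across opposing groups
from dataclasses import dataclass
from typing import List

@dataclass
class Post:
    text: str
    # Hypothetical per-group approval rates in [0, 1], e.g. from two audience groups.
    approval_group_a: float
    approval_group_b: float
    predicted_engagement: float  # assumed output of some engagement model

def engagement_rank(posts: List[Post]) -> List[Post]:
    """Rank purely by predicted engagement (assumed proxy for clicks/likes)."""
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

def bridging_rank(posts: List[Post]) -> List[Post]:
    """Rank by cross-group approval: high mean approval, low between-group gap."""
    def bridge_score(p: Post) -> float:
        mean_approval = (p.approval_group_a + p.approval_group_b) / 2
        disagreement = abs(p.approval_group_a - p.approval_group_b)
        return mean_approval - disagreement  # penalize one-sided appeal
    return sorted(posts, key=bridge_score, reverse=True)

if __name__ == "__main__":
    posts = [
        Post("divisive but clicky", 0.9, 0.1, predicted_engagement=0.95),
        Post("broadly informative", 0.7, 0.65, predicted_engagement=0.55),
    ]
    print([p.text for p in engagement_rank(posts)])  # divisive post ranked first
    print([p.text for p in bridging_rank(posts)])    # broadly approved post first
```

The toy example shows how the same pool of posts can be ordered very differently depending on the objective, which is the kind of "simple change to how posts are sampled and ranked" the experiment manipulates.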
ACM CHI Conference on Human Factors in Computing Systems