AI summaries on social media are reshaping how users form opinions about political topics, yet their influence remains largely unexamined despite their widespread deployment. This paper investigates how two types of AI summaries affect user opinions and engagement: textual summaries of discussion narratives and percentage breakdowns of agreement/disagreement. Through a 144-participant experiment on simulated online discussion threads, we found that displaying commenter agreement percentages amplified social conformity toward majority views beyond the effect of reading comments alone. Conversely, AI narrative summaries created misperceptions of balance in polarised threads, reducing opinion change. While these summaries did not influence participants' willingness to engage, toxic discussions deterred participation even when participants held majority views. Based on our findings, we propose design interventions for industry and researchers to mitigate these tools' polarising effects, paving the way for responsible AI deployment on social media platforms.
ACM CHI Conference on Human Factors in Computing Systems