LLMs Homogenize Values in Constructive Arguments on Value-Laden Topics

Abstract

Large language models (LLMs) are increasingly used to promote prosocial and constructive discourse online. Yet little is known about how these models negotiate and shape underlying values when reframing people's arguments on value-laden topics. We conducted experiments with 465 participants from India and the United States, who wrote comments on homophobic and Islamophobic threads and reviewed human-written and LLM-rewritten constructive versions of these comments. Our analysis shows that LLMs systematically diminish Conservative values while elevating prosocial values such as Benevolence and Universalism. When these comments were read by others, participants opposing same-sex marriage or Islam found human-written comments more aligned with their values, whereas those supportive of these communities found LLM-rewritten versions more aligned with theirs. These findings suggest that value homogenization in LLM-mediated prosocial discourse risks marginalizing conservative viewpoints on value-laden topics and may inadvertently shape the dynamics of online discourse.

Authors
Farhana Shahid
Cornell University, Ithaca, New York, United States
Stella Zhang
Cornell University, Ithaca, New York, United States
Aditya Vashistha
Cornell University, Ithaca, New York, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Critical Reflections on AI

P1 - Room 121
7 presentations
2026-04-14, 20:15–21:45