Power Echoes: Investigating Moderation Biases in Online Power-Asymmetric Conflicts

Abstract

Online power-asymmetric conflicts are prevalent, and most platforms currently rely on human moderators to resolve them. Prior studies have investigated human moderation biases across a range of scenarios, but biases in moderating power-asymmetric conflicts remain unexplored. We therefore investigate the types of power-related biases human moderators exhibit when moderating power-asymmetric conflicts (RQ1) and how AI suggestions influence these biases (RQ2). To this end, we conducted a mixed-design experiment with 50 participants, using real conflicts between consumers and merchants as the scenario. Results reveal several biases favoring the more powerful party under both moderation modes. AI assistance alleviates most human moderation biases but amplifies a few. Based on these results, we offer insights for future research on human moderation and human-AI collaborative moderation systems for power-asymmetric conflicts.

Authors
Yaqiong Li
Fudan University, Shanghai, China
Peng Zhang
Fudan University, Shanghai, China
Peixu Hou
Meituan, Shanghai, China
Kainan Tu
Fudan University, Shanghai, China
Guangping Zhang
Fudan University, Shanghai, China
Shan Qu
Meituan, Shanghai, China
Wenshi Chen
Meituan, Shanghai, China
Yan Chen
Virginia Tech, Blacksburg, Virginia, United States
Ning Gu
Fudan University, Shanghai, China
Tun Lu
Fudan University, Shanghai, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Algorithmic Power, Justice and Repression

Auditorium
7 presentations
2026-04-15, 20:15–21:45