Perverse Downstream Consequences of Debunking: Being Corrected by Another User for Posting False Political News Increases Subsequent Sharing of Low Quality, Partisan, and Toxic Content in a Twitter Field Experiment

Abstract

A prominent approach to combating online misinformation is to debunk false content. Here we investigate downstream consequences of social corrections on users’ subsequent sharing of other content. Being corrected might make users more attentive to accuracy, thus improving their subsequent sharing. Alternatively, corrections might not improve subsequent sharing, or might even backfire, by making users feel defensive or by shifting their attention away from accuracy (e.g., towards various social factors). We identified N=2,000 users who shared false political news on Twitter, and replied to their false tweets with links to fact-checking websites. We find causal evidence that being corrected decreases the quality, and increases the partisan slant and language toxicity, of the users’ subsequent retweets (but has no significant effect on primary tweets). This suggests that being publicly corrected by another user shifts one’s attention away from accuracy, presenting an important challenge for social correction approaches.

Authors
Mohsen Mosleh
University of Exeter Business School, Exeter, United Kingdom
Cameron Martel
MIT, Cambridge, Massachusetts, United States
Dean Eckles
MIT, Cambridge, Massachusetts, United States
David Rand
MIT, Cambridge, Massachusetts, United States
DOI

10.1145/3411764.3445642

Paper URL

https://doi.org/10.1145/3411764.3445642

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Trust, Transparency & Sharing Online

[A] Paper Room 07, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 07, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 07, 2021-05-14 09:00:00~2021-05-14 11:00:00
Paper Room 07 (14 presentations)