Disagree? You Must Be a Bot! How Beliefs Shape Twitter Profile Perceptions

Abstract

In this paper, we investigate the human ability to distinguish political social bots from humans on Twitter. Drawing on motivated reasoning theory from social and cognitive psychology, our central hypothesis is that accounts that are opinion-incongruent are especially likely to be perceived as social bots when an account is ambiguous about its nature. We further hypothesize that credibility ratings mediate this relationship. We asked N = 151 participants to evaluate 24 Twitter accounts and to decide whether each account was a human or a social bot. Findings support our motivated reasoning hypothesis for a sub-group of Twitter users (those who are more familiar with Twitter): accounts that are opinion-incongruent are evaluated as more bot-like than accounts that are opinion-congruent, regardless of whether the account is clearly a social bot, clearly a human, or ambiguous about its nature. This effect was mediated by perceived credibility: opinion-congruent profiles were evaluated as more credible, which in turn led to lower bot perceptions.

Authors
Magdalena Wischnewski
University of Duisburg-Essen, Duisburg, Germany
Rebecca Bernemann
University of Duisburg-Essen, Duisburg, Germany
Thao Ngo
University of Duisburg-Essen, Duisburg, Germany
Nicole Krämer
University of Duisburg-Essen, Duisburg, Germany
DOI

10.1145/3411764.3445109

Paper URL

https://doi.org/10.1145/3411764.3445109

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Human-AI, Automation, Vehicles & Drones / Trust & Explainability

[A] Paper Room 15, 2021-05-13 17:00:00~2021-05-13 19:00:00 / [B] Paper Room 15, 2021-05-14 01:00:00~2021-05-14 03:00:00 / [C] Paper Room 15, 2021-05-14 09:00:00~2021-05-14 11:00:00