(mis)Information & fake news

Paper session

Conference Name
CHI 2020
The Government's Dividend: Complex Perceptions of Social Media Misinformation in China
Abstract

The social media environment in China has become the dominant source of information and news over the past decade. This news environment has naturally suffered from challenges related to mis- and dis-information, encumbered by an increasingly complex landscape of factors and players including social media services, fact-checkers, censorship policies, and astroturfing. Interviews with 44 Chinese WeChat users were conducted to understand how individuals perceive misinformation and how it impacts their news consumption practices. Overall, this work exposes the diverse attitudes and coping strategies that Chinese users employ in complex social media environments. Due to the complex nature of censorship in China and participants' lack of understanding of censorship, they expressed varied opinions about its influence on the credibility of online information sources. Further, although most participants claimed that their opinions would not be easily swayed by astroturfers, many admitted that they could not effectively distinguish astroturfers from ordinary Internet users. Participants' inability to make sense of comments found online led many to hold pro-censorship attitudes: the Government's Dividend.

Keywords
Social media
fake news
misinformation
trust
astroturfing
Authors
Zhicong Lu
University of Toronto, Toronto, ON, Canada
Yue Jiang
University of Maryland, College Park, MD, USA
Cheng Lu
University of Toronto, Toronto, ON, Canada
Mor Naaman
Cornell Tech, New York, NY, USA
Daniel Wigdor
University of Toronto, Toronto, ON, Canada
DOI

10.1145/3313831.3376612

Paper URL

https://doi.org/10.1145/3313831.3376612

Fake News on Twitter and Facebook: Investigating How People (Don't) Investigate
Abstract

With misinformation proliferating online and more people getting news from social media, it is crucial to understand how people assess and interact with low-credibility posts. This study explores how users react to fake news posts on their Facebook or Twitter feeds, appearing as if posted by someone they follow. We conducted semi-structured interviews with 25 participants who use social media regularly for news, temporarily caused fake news to appear in their feeds with a browser extension unbeknownst to them, and observed as they walked us through their feeds. We found various reasons why people do not investigate low-credibility posts, including taking trusted posters' content at face value, as well as not wanting to spend the extra time. We also document people's investigative methods for determining credibility using both platform affordances and their own ad-hoc strategies. Based on our findings, we present design recommendations for supporting users when investigating low-credibility posts.

Keywords
Misinformation
disinformation
fake news
social media
Facebook
Twitter
trust
verification
Authors
Christine Geeng
University of Washington, Seattle, WA, USA
Savanna Yee
University of Washington, Seattle, WA, USA
Franziska Roesner
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376784

Paper URL

https://doi.org/10.1145/3313831.3376784

Understanding User Perception of Automated News Generation System
Abstract

Automated journalism refers to the generation of news articles using computer programs. Although it is widely used in practice, its user experience and interface design remain largely unexplored. To understand the user perception of an automated news system, we designed NewsRobot, a research prototype that automatically generated news on major events of the PyeongChang 2018 Winter Olympic Games in real-time. It produces six types of news by combining two kinds of content (general/individualized) and three styles (text, text+image, text+image+sound). A total of 30 users participated in using NewsRobot, completing surveys and interviews on their experience. Our findings are as follows: (1) Users preferred individualized news yet considered it less credible, (2) more presentation elements were appreciated but only if their quality was assured, and (3) NewsRobot was considered factual and accurate yet shallow in depth. Based on our findings, we discuss implications for designing automated journalism user interfaces.

Keywords
Automated journalism
Robot journalism
Multimedia modality
automated news generation system
Authors
Changhoon Oh
Carnegie Mellon University, Pittsburgh, PA, USA
Jinhan Choi
Seoul National University, Seoul, Republic of Korea
Sungwoo Lee
Seoul National University, Seoul, Republic of Korea
SoHyun Park
Seoul National University, Seoul, Republic of Korea
Daeryong Kim
Seoul National University, Seoul, Republic of Korea
Jungwoo Song
Seoul National University, Seoul, Republic of Korea
Dongwhan Kim
Yonsei University, Seoul, Republic of Korea
Joonhwan Lee
Seoul National University, Seoul, Republic of Korea
Bongwon Suh
Seoul National University, Seoul, Republic of Korea
DOI

10.1145/3313831.3376811

Paper URL

https://doi.org/10.1145/3313831.3376811

Disseminating Research News in HCI: Perceived Hazards, How-To's, and Opportunities for Innovation
Abstract

Mass media afford researchers critical opportunities to disseminate research findings and trends to the general public. Yet researchers also perceive that their work can be miscommunicated in mass media, thus generating unintended understandings of HCI research by the general public. We conduct a Grounded Theory analysis of interviews with 12 HCI researchers and find that miscommunication can occur at four origins along the socio-technical infrastructure known as the Media Production Pipeline (MPP) for science news. Results yield researchers' perceived hazards of disseminating their work through mass media, as well as strategies for fostering effective communication of research. We conclude with implications for augmenting or innovating new MPP technologies.

Keywords
Media Production Pipeline
Science Communications
Journalism
Miscommunication
Mass Media
Mass Communication
News Production
Authors
C. Estelle Smith
University of Minnesota, Minneapolis, MN, USA
Eduardo Nevarez
University of Minnesota, Minneapolis, MN, USA
Haiyi Zhu
Carnegie Mellon University, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376744

Paper URL

https://doi.org/10.1145/3313831.3376744

Will the Crowd Game the Algorithm? Using Layperson Judgments to Combat Misinformation on Social Media by Downranking Distrusted Sources
Abstract

How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be handicapped by motivated reasoning or lack of expertise, and thus unable to identify misinformation sites? And will they "game" this crowdsourcing mechanism in order to promote content that aligns with their partisan agendas? We conducted a survey experiment in which n = 984 Americans indicated their trust in numerous news sites. To study the tendency of people to game the system, half of the participants were told their responses would inform social media ranking algorithms. Participants trusted mainstream sources much more than hyper-partisan or fake news sources, and their ratings were highly correlated with professional fact-checker judgments. Critically, informing participants that their responses would influence ranking algorithms did not diminish these results, despite the manipulation increasing the political polarization of trust ratings.

Keywords
Misinformation
Crowdsourcing
Social Media
Authors
Ziv Epstein
Massachusetts Institute of Technology, Cambridge, MA, USA
Gordon Pennycook
University of Regina, Regina, SK, Canada
David Rand
Massachusetts Institute of Technology, Cambridge, MA, USA
DOI

10.1145/3313831.3376232

Paper URL

https://doi.org/10.1145/3313831.3376232