The social media environment in China has become the dominant source of information and news over the past decade. This news environment has naturally suffered from challenges related to mis- and disinformation, encumbered by an increasingly complex landscape of factors and players including social media services, fact-checkers, censorship policies, and astroturfing. Interviews with 44 Chinese WeChat users were conducted to understand how individuals perceive misinformation and how it impacts their news consumption practices. Overall, this work exposes the diverse attitudes and coping strategies that Chinese users employ in complex social media environments. Due to the complex nature of censorship in China and participants' limited understanding of it, they expressed varied opinions about its influence on the credibility of online information sources. Further, although most participants claimed that their opinions would not be easily swayed by astroturfers, many admitted that they could not effectively distinguish astroturfers from ordinary Internet users. Participants' inability to make sense of comments found online led many of them to hold pro-censorship attitudes: the Government's Dividend.
With misinformation proliferating online and more people getting news from social media, it is crucial to understand how people assess and interact with low-credibility posts. This study explores how users react to fake news posts in their Facebook or Twitter feeds that appear to be posted by someone they follow. We conducted semi-structured interviews with 25 participants who use social media regularly for news, used a browser extension to temporarily cause fake news to appear in their feeds without their knowledge, and observed them as they walked us through their feeds. We found various reasons why people do not investigate low-credibility posts, including taking trusted posters' content at face value and not wanting to spend the extra time. We also document people's investigative methods for determining credibility using both platform affordances and their own ad-hoc strategies. Based on our findings, we present design recommendations for supporting users when investigating low-credibility posts.
Automated journalism refers to the generation of news articles using computer programs. Although it is widely used in practice, its user experience and interface design remain largely unexplored. To understand the user perception of an automated news system, we designed NewsRobot, a research prototype that automatically generated news on major events of the PyeongChang 2018 Winter Olympic Games in real time. It produced six types of news by combining two kinds of content (general/individualized) with three presentation styles (text, text+image, text+image+sound). A total of 30 users tried NewsRobot and completed surveys and interviews about their experience. Our findings are as follows: (1) users preferred individualized news yet considered it less credible, (2) more presentation elements were appreciated but only if their quality was assured, and (3) NewsRobot was considered factual and accurate yet shallow in depth. Based on our findings, we discuss implications for designing automated journalism user interfaces.
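The abstract does not describe NewsRobot's implementation; purely as an illustration of the 2 × 3 design space it mentions (content kind × presentation style), the following Python sketch enumerates the six news variants. The labels and structure are hypothetical, not taken from the paper.

```python
from itertools import product

# Hypothetical labels for the two design dimensions named in the abstract:
# content kind (general vs. individualized) and presentation style.
CONTENT_KINDS = ["general", "individualized"]
STYLES = ["text", "text+image", "text+image+sound"]


def news_variants():
    """Enumerate the 2 x 3 = 6 news types described for NewsRobot."""
    return [
        {"content": content, "style": style}
        for content, style in product(CONTENT_KINDS, STYLES)
    ]


if __name__ == "__main__":
    for i, variant in enumerate(news_variants(), start=1):
        print(f"Type {i}: {variant['content']} content, {variant['style']} style")
```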
Mass media afford researchers critical opportunities to disseminate research findings and trends to the general public. Yet researchers also perceive that their work can be miscommunicated in mass media, thus generating unintended understandings of HCI research by the general public. We conduct a Grounded Theory analysis of interviews with 12 HCI researchers and find that miscommunication can occur at four origins along the socio-technical infrastructure known as the Media Production Pipeline (MPP) for science news. Results yield researchers' perceived hazards of disseminating their work through mass media, as well as strategies for fostering effective communication of research. We conclude with implications for augmenting existing MPP technologies or innovating new ones.
How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be handicapped by motivated reasoning or lack of expertise, and thus unable to identify misinformation sites? And will they "game" this crowdsourcing mechanism in order to promote content that aligns with their partisan agendas? We conducted a survey experiment in which N = 984 Americans indicated their trust in numerous news sites. To study the tendency of people to game the system, half of the participants were told their responses would inform social media ranking algorithms. Participants trusted mainstream sources much more than hyper-partisan or fake news sources, and their ratings were highly correlated with professional fact-checker judgments. Critically, informing participants that their responses would influence ranking algorithms did not diminish these results, despite the manipulation increasing the political polarization of trust ratings.
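The abstract raises the idea of downranking content from low-trust sources without specifying an algorithm; the minimal Python sketch below, using hypothetical trust ratings and a simple mean-trust weighting, illustrates one way such a ranking could work. The source names, ratings, and scoring rule are all assumptions for illustration, not the study's method.

```python
from statistics import mean

# Hypothetical crowdsourced trust ratings (0-1) per news source; in the study,
# laypeople rated sources and their ratings correlated with fact-checker judgments.
trust_ratings = {
    "mainstream-example.com": [0.9, 0.8, 0.85],
    "hyperpartisan-example.com": [0.3, 0.4, 0.2],
    "fake-news-example.com": [0.1, 0.05, 0.2],
}

# A toy feed: (post_id, source, engagement_score).
feed = [
    ("p1", "fake-news-example.com", 0.95),
    ("p2", "mainstream-example.com", 0.60),
    ("p3", "hyperpartisan-example.com", 0.80),
]


def source_trust(source: str) -> float:
    """Average layperson trust for a source; unknown sources get a neutral 0.5."""
    ratings = trust_ratings.get(source)
    return mean(ratings) if ratings else 0.5


def rank_feed(posts):
    """Downrank posts from low-trust sources by weighting engagement by trust."""
    return sorted(posts, key=lambda p: p[2] * source_trust(p[1]), reverse=True)


if __name__ == "__main__":
    for post_id, source, engagement in rank_feed(feed):
        print(post_id, source, round(engagement * source_trust(source), 3))
```

A real platform would combine many more signals than engagement and source trust; this sketch only demonstrates the downranking idea the abstract describes.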