Political organizations worldwide keep innovating their use of social media technologies. This study describes a novel configuration of technologies and organizational forms for the manipulation of Twitter trends. In the 2019 Indian general election, campaign organizers used a network of WhatsApp groups to coordinate mass postings by loosely affiliated supporters. To investigate the campaigns, we joined more than 600 political WhatsApp groups that support the Bharatiya Janata Party, the right-wing party that won the general election. We found direct evidence of 75 hashtag manipulation campaigns, including mobilization messages and lists of pre-written tweets. We estimate the campaigns' size and assess whether they succeeded in creating controlled narratives on social media. The findings show that the campaigns were smaller than other reports suggested; nevertheless, the strategy reliably produced Twitter trends through the voices of loosely affiliated supporters. Centrally controlled but voluntary in participation, this novel campaign configuration complicates debates over the legitimate use of digital tools for political participation. It may have provided a blueprint for participatory media manipulation by a party with popular support.
https://doi.org/10.1145/3479523
To improve news literacy and consequently reduce the harm of misinformation, we designed and developed a game that emulates a social media feed. We tracked user interactions with articles from mainstream and low-credibility sources during a 19-month deployment of the game. The game achieved its objective of priming players to be suspicious of questionable content. As players interacted with more articles, they significantly improved their skills in spotting mainstream content, thus confirming the utility of the game for improving news literacy. Semi-structured interviews revealed that the players found the game to be simple, fun and educational. The principles and mechanisms used by the game could serve to inform social media functionality that helps people distinguish between credible and questionable content they encounter in their news feeds.
https://doi.org/10.1145/3449080
As news is increasingly spread through social media platforms, the problem of identifying misleading or false information (colloquially called "fake news") has come into sharp focus. Many factors may help users judge the accuracy of news articles, ranging from the text itself to meta-data such as the headline, an image, or the bias of the originating source. In this research, participants (n = 175) of various political ideological leanings categorized news articles as real or fake based on either article text or meta-data. We used a mixed-methods approach to investigate how various article elements (news title, image, source bias, and excerpt) affect users' accuracy in identifying real and fake news. We also compared human performance to automated detection based on the same article elements and found that automated techniques were more accurate than our human sample; in both cases, the best performance came not from the article text itself but from focusing on certain elements of meta-data. Adding the source bias did not help humans, but did help automated detectors. Open-ended responses suggested that the image in particular may be a salient element for humans detecting fake news.
https://doi.org/10.1145/3449183
When users on social media share content without considering its veracity, they may unwittingly be spreading misinformation. In this work, we investigate the design of lightweight interventions that nudge users to assess the accuracy of information as they share it. Such assessment may deter users from posting misinformation in the first place, and their assessments may also provide useful guidance to friends aiming to assess those posts themselves. In support of lightweight assessment, we first develop a taxonomy of the reasons why people believe a news claim is or is not true; this taxonomy yields a checklist that can be used at posting time. We conduct evaluations to demonstrate that the checklist is an accurate and comprehensive encapsulation of people's free-response rationales. In a second experiment, we study the effects of three behavioral nudges on people's intention to share a headline on social media: 1) checkboxes indicating whether headlines are accurate, 2) tagging reasons (from our taxonomy) that a post is accurate via a checklist, and 3) providing free-text rationales for why a headline is or is not accurate. From an experiment with 1,668 participants, we find that all three nudges reduce the sharing of false content. They also reduce the sharing of true content, but to a lesser degree, yielding an overall decrease in the fraction of shared content that is false. Our findings have implications for designing social media and news sharing platforms that draw on richer signals of content credibility contributed by users. In addition, our validated taxonomy can be used by platforms and researchers to gather rationales more easily than with free-response questions.
https://doi.org/10.1145/3449092
Social media platforms have been exploited to conduct election interference in recent years. In particular, the Russian-backed Internet Research Agency (IRA) has been identified as a key source of misinformation spread on Twitter prior to the 2016 U.S. presidential election. The goal of this research is to understand whether general Twitter users changed their behavior in the year following first contact from an IRA account. We compare the before and after behavior of contacted users to determine whether there were differences in their mean tweet count, the sentiment of their tweets, and the frequency and sentiment of tweets mentioning @realDonaldTrump or @HillaryClinton. Our results indicate that contacted users overall exhibited statistically significant changes in behavior across most of these metrics, and that users who engaged with the IRA generally showed greater changes in behavior.
https://doi.org/10.1145/3449164
How do people come to believe conspiracy theories, and what role does the internet play in this process as a socio-technical system? We explore these questions by examining online participants in the "chemtrails" conspiracy, the idea that the visible trails behind airliners are chemicals deliberately sprayed for nefarious purposes. We apply Weick's theory of sensemaking to examine the role of people's frames (beliefs and worldviews), as well as the socio-technical contexts (social interactions and technological affordances) for processing informational cues about the conspiracy. Through an analysis of in-depth interviews with thirteen believers and seven ex-believers, we find that many people become curious about chemtrails after consuming rich online media, and they later find welcoming online communities to support shared beliefs and worldviews. We discuss how the socio-technical context of the internet may inadvertently trap people in a perpetual state of ambiguity that becomes reinforced through a collective sensemaking process. In addition, we show how the conspiracy offers a way for believers to express their dissatisfaction with authority, enjoy a sense of community, and find some entertainment along the way. Finally, we discuss how people's frames and the various socio-technical contexts of the internet are important in the sensemaking of debunking evidence, and how such factors may function in the rejection of conspiratorial beliefs.
https://doi.org/10.1145/3479598
As news organizations embrace transparency practices on their websites to distinguish themselves from those spreading misinformation, HCI designers have the opportunity to help them effectively utilize the ideals of transparency to build trust. How can we utilize transparency to promote trust in news? We examine this question through a qualitative lens by interviewing journalists and news consumers, the two key stakeholder groups in a news system. We designed a scenario to demonstrate transparency features using two fundamental news attributes that convey the trustworthiness of a news article: source and message. In the interviews, the news consumers suggested that news transparency could best be conveyed by providing indicators of objectivity in two areas (news selection and framing) and by providing indicators of evidence in four areas (presence of source materials, anonymous sourcing, verification, and corrections upon erroneous reporting). While the journalists agreed with news consumers' suggestions of using evidence indicators, they also suggested additional transparency indicators in areas such as the news reporting process and personal/organizational conflicts of interest. Prompted by our scenario, participants offered new design considerations for building trustworthy news platforms, such as designing for easy comprehension, presenting appropriate details in news articles (e.g., showing the number and nature of corrections made to an article), and comparing attributes across news organizations to highlight diverging practices. Comparing the responses from the two stakeholder groups reveals conflicting suggestions with trade-offs between them. Our study has implications for HCI designers in building trustworthy news systems.
https://doi.org/10.1145/3479539
Struggling to curb misinformation, social media platforms are experimenting with design interventions to enhance the consumption of credible news. Some of these interventions, such as warning messages, are examples of nudges, a choice-preserving technique for steering behavior. Despite their application, we do not know whether nudges can steer people toward making conscious news credibility judgments online and, if they do, under what constraints. To answer this, we combine nudge techniques with heuristic-based information processing to design NudgeCred, a browser extension for Twitter. NudgeCred directs users' attention to two design cues, the authority of a source and other users' collective opinion on a report, by activating three design nudges: Reliable, Questionable, and Unreliable, each denoting a particular level of credibility for news tweets. In a controlled experiment, we found that NudgeCred significantly helped users (n = 430) distinguish the credibility of news tweets, regardless of three behavioral confounds: political ideology, political cynicism, and media skepticism. A five-day field deployment with twelve participants revealed that NudgeCred improved their recognition of news items and attention toward all of our nudges, particularly Questionable. Among other considerations, participants proposed that designers should incorporate heuristics that users would trust. Our work informs nudge-based system design approaches for online media.
https://doi.org/10.1145/3479571