While people's perceptions of targeted ads have been studied extensively from a Western perspective (e.g., North America, Europe), we know little about users' perceptions in the South Asian region. We interviewed 40 participants from two South Asian countries, Bangladesh and India, to explore their perceptions and practices regarding targeted ads on social media platforms. Participants identified emerging ad types, such as influencer-based ads and soft ads delivered through articles. In addition, participants often prioritized discounts over product quality when viewing ads. We also observed novel user mental models of targeted ads, based on mobile app permissions and excessive AI usage. Participants often preferred ad control over transparency. While most participants rarely used ad settings, some controlled ads by changing mobile app permissions or muting ads on social media platforms. Participants also raised concerns about fraudulent targeted ads and privacy violations due to device sharing. We present potential design ideas to mitigate these concerns.
https://doi.org/10.1145/3544548.3581498
Virtual influencers (VIs) are on the rise on Instagram, and companies increasingly cooperate with them for marketing campaigns. This has motivated a growing number of studies investigating how we perceive these influencers. Most studies report that VIs are often rated lower in perceived trust and higher in uncanniness. Yet we still lack a deeper understanding of why this is the case. We conduct two studies: 1) a questionnaire with 150 participants to capture the general perception of the included influencers, and 2) an electroencephalography (EEG) study to gain insight into the underlying neural mechanisms of influencer perception. Our results support findings from related work regarding lower trust and higher uncanniness associated with VIs. Interestingly, the EEG components N400 and LPP were modulated not by perceived trust, but rather by perceived humanness, uncanniness, and intentions to follow recommendations. This provides a fruitful starting point for future research on virtual humans.
https://doi.org/10.1145/3544548.3580943
Misinformation has become a regular occurrence in our lives, and many different approaches are being sought to address it. One effective way to combat misinformation is for trusted individuals (e.g., family members) to challenge the misinformed person. However, less is known about how these conversations between trusted individuals occur, and how they may impact relationships. We address this gap by conducting semi-structured interviews with family members in the UK who have experienced misinformation within their family networks. We identify several barriers individuals face when challenging misinformed family members, such as the misinformed person's personality and the extent to which preconceptions influence beliefs. We also find that individuals develop strategies to overcome these barriers and to cope with difficulties that arise through these conversations. Despite technology being the main driver of misinformation spread, we find it has limitations when used to facilitate or mediate conversations for challenging misinformation between family members.
https://doi.org/10.1145/3544548.3581202
Assessing the trustworthiness of information online is complicated. Literacy-based paradigms are widely used to help, yet also widely critiqued. We conducted a study with 35 Gen Zers from across the U.S. to understand how they assess information online. We found that they tended to encounter---rather than search for---information, and that those encounters were shaped more by social motivations than by truth-seeking queries. For them, information processing is fundamentally a social practice. Gen Zers interpreted online information together, as aspirational members of social groups. Our participants sought information sensibility: a socially-informed awareness of the value of information encountered online. We outline key challenges they faced and practices they used to make sense of information. Our findings suggest that, like their information sensibility practices, solutions and strategies to address misinformation should be embedded in social contexts online.
https://doi.org/10.1145/3544548.3581328
Tech companies that rely on ads for business argue that users have control over their data via ad privacy settings. However, these ad settings are often hidden. This work aims to inform the design of findable ad controls and to study their impact on users' behavior and sentiment. We iteratively designed ad control interfaces that varied in the setting's (1) entry point (within ads, at the feed's top) and (2) level of actionability, with high actionability directly surfacing links to specific advertising settings, and low actionability pointing to general settings pages (reminiscent of companies' current approach to ad controls). We built a Chrome extension that augments Facebook with our experimental ad control interfaces and conducted a between-subjects online experiment with 110 participants. Results showed that entry points within ads or at the feed's top, as well as high-actionability interfaces, increased the findability and discoverability of Facebook's ad settings, along with their perceived usability. High actionability also reduced users' effort in finding ad settings. Participants perceived high and low actionability as equally usable, which shows it is possible to design more actionable ad controls without overwhelming users. We conclude by emphasizing the importance of regulation that provides specific, research-informed requirements to companies on how to design usable ad controls.
https://doi.org/10.1145/3544548.3580773
Fact-checking messages are shared or ignored subjectively. Users tend to seek like-minded information and ignore information that conflicts with their preexisting beliefs, leaving like-minded misinformation uncontrolled on the Internet. To understand the factors that deter fact-checking engagement, we investigated the psychological characteristics associated with users' selective avoidance of clicking on uncongenial facts. In a pre-registered experiment, we measured participants' (N = 506) preexisting beliefs about COVID-19-related news stimuli. We then examined whether they clicked on fact-checking links for false news that they believed to be accurate. We propose an index that divides participants into fact-avoidance and fact-exposure groups using a mathematical baseline. The results indicated that 43% of participants selectively avoided clicking on uncongenial facts, leaving 93% of their false beliefs intact. Reflexiveness was the psychological characteristic that predicted selective avoidance. We discuss this susceptibility to click bias, which prevents users from utilizing fact-checking websites, and the implications for future design.
https://doi.org/10.1145/3544548.3580826