Dark patterns are user interface elements that can influence a person's behavior against their intentions or best interests. Prior work identified these patterns in websites and mobile apps, but little is known about how the design of platforms might impact dark pattern manifestations and related human vulnerabilities. In this paper, we conduct a comparative study of mobile application, mobile browser, and web browser versions of 105 popular services to investigate variations in dark patterns across modalities. We perform manual tests, identify dark patterns in each service, and examine how they persist or differ by modality. Our findings show that while services can employ some dark patterns equally across modalities, many dark patterns vary between platforms, and that these differences saddle people with inconsistent experiences of autonomy, privacy, and control. We conclude by discussing broader implications for policymakers and practitioners, and provide suggestions for furthering dark patterns research.
https://doi.org/10.1145/3479521
The COVID-19 pandemic lockdown led to the rapid adoption and use of various groupware applications (“apps”) for remote connection with colleagues, friends, and family. Different factors such as user experiences, trust, and social influences (“user-situational motivations”) were instrumental in determining how and what apps people adopted and used, especially at the onset of the COVID-19 pandemic. In this empirical study, we examine how these factors and four predominant user-situational motivations (i.e., mandated use of an app by an employer/institution, recommended use of an app by an employer/institution, recommended use of an app by peers, and self-selection of an app) influenced the rapid adoption and use of groupware applications. Specifically, we develop an “emergency adoption model” of groupware applications using 195 valid survey responses to highlight the factors that motivated these apps’ use at the onset of the COVID-19 pandemic lockdown. We leverage the Technology Acceptance Model (TAM) and integrate it with users’ past use of the application before the COVID-19 lockdown, their user-situational motivation, and their privacy-related trust in the application provider to develop a more comprehensive model. Using confirmatory factor analysis (CFA) and structural equation modeling (SEM), we find that users who had used a groupware app in the past continued to use it and that, in line with TAM, users’ intention to adopt and use a groupware application was largely driven by the app’s ease of use and usefulness. Furthermore, although it is not part of the traditional TAM model, we find that trust in the application provider plays an important role in emergency adoption. However, unlike in typical adoption models, the nature of all these effects, most prominently the effect of privacy-related trust, depends on the underlying situational motivation. We discuss the implications of these findings and suggest ways to improve the adoption and use of groupware applications, especially during crises like the COVID-19 pandemic.
https://doi.org/10.1145/3479549
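To make the modeling approach described in the abstract above concrete, the sketch below shows how a TAM-style structural model extended with privacy-related trust and past use could be specified and estimated in Python with the semopy library. The item names (peou1 ... intent2), the file name, and the exact structural paths are illustrative assumptions, not the authors’ actual specification.

```python
# Minimal sketch of a TAM-style CFA/SEM with added trust and past-use terms,
# estimated with semopy. Item names and paths are hypothetical; the paper's
# actual measurement and structural model may differ.
import pandas as pd
from semopy import Model, calc_stats

model_desc = """
# Measurement model: latent constructs and their survey items
PEOU =~ peou1 + peou2 + peou3
PU =~ pu1 + pu2 + pu3
TRUST =~ trust1 + trust2 + trust3
INTENT =~ intent1 + intent2
# Structural model: TAM core plus privacy-related trust and past use
PU ~ PEOU + TRUST
INTENT ~ PU + PEOU + TRUST + past_use
"""

# survey.csv (hypothetical) holds one row per respondent with the columns above
data = pd.read_csv("survey.csv")

model = Model(model_desc)
model.fit(data)

print(model.inspect())    # parameter estimates: loadings and path coefficients
print(calc_stats(model))  # fit indices such as CFI, TLI, RMSEA
```

A multi-group version of this analysis, splitting respondents by situational motivation (mandated, employer-recommended, peer-recommended, self-selected), is one way the paths could be compared across motivations as the abstract describes.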
Recent advances in 3D reconstruction technology allow people to capture and share their experiences in 3D. However, little is known about people’s sharing preferences and privacy concerns for these reconstructed experiences. To fill this gap, we first present ReliveReality, an experience-sharing method that utilizes deep-learning-based computer vision techniques to reconstruct clothed humans and 3D environments and to estimate 3D pose from only an RGB camera. ReliveReality can be integrated into social virtual environments, allowing others to socially relive a shared experience by moving around it and viewing it from different perspectives, on desktop or in VR. We conducted a 44-participant within-subjects study comparing ReliveReality to viewing recorded videos and to a ReliveReality version with blurring obfuscation. Our results shed light on how people identify with reconstructed avatars, how obfuscation affects the experience of reliving, and what sharing preferences and privacy concerns people have for reconstructed experiences. We propose design implications for addressing these issues.
https://doi.org/10.1145/3476078
Video game players face a fundamental challenge in managing their competing desires for both privacy and publicity, for being both apart from, and a part of, the communities in which they play. In this paper, we argue that “gamertags” are important tools for protecting gamers’ privacy as well as creative outlets for expressing meaningful aspects of identity. Based on 30 semi-structured interviews focused on players’ usernames, we find that, through the pseudonyms under which they play, gamers hide identifying information such as their offline names and addresses while drawing attention to information that is deeply meaningful to them, such as a family nickname or favorite music. By deemphasizing some parts of their identity and emphasizing others, players not only shape how they are perceived by other gamers but also attempt to preclude accidental disclosure of more identifying information. We argue that gamertag practices thus constitute an important form of boundary work through which gamers actively seek to draw lines between their offline and multiple online worlds in the ways that they wish. Gamers use these names to both protect and project aspects of their identities, at times even seeking protection through projection, as a way of addressing their competing desires to conceal and reveal different aspects of who they are. As boundary work, players’ efforts to carefully protect personally identifying information and intentionally project personally meaningful information to their communities help them better manage their online identities, their relationships with others, and their overall data privacy.
https://doi.org/10.1145/3449233
Social media companies wield power over their users through design, policy, and their participation in public discourse. We set out to understand how companies leverage public relations to influence expectations of privacy and privacy-related norms. To interrogate companies’ discourse production in relation to privacy, we examine the blogs associated with three major social media platforms: Facebook, Instagram (both owned by Facebook Inc.), and Snapchat. We analyze privacy-related posts using critical discourse analysis to demonstrate how these powerful entities construct narratives about users and their privacy expectations. We find that each of these platforms often makes use of discourse about “vulnerable” identities to invoke relations of power while, at the same time, advancing interpretations and values that favor data capitalism. Finally, we discuss how these public narratives might influence the construction of users’ own interpretations of appropriate privacy norms and conceptions of self. We contend that expectations of privacy and social norms are not simply artifacts of users’ own needs and desires, but co-constructions that reflect the influence of social media companies themselves.
This paper qualitatively examines how members of a large private Facebook group view the risks that information disclosure poses to their privacy and the strategies they employ to navigate and manage those risks. The paper adds to an emerging interest in how privacy is managed collectively and within dynamic large groups, thus moving beyond established knowledge of privacy management at individual and small-scale levels. The work builds on semi-structured interviews with 20 members of a private Facebook group and draws on Communication Privacy Management theory. The study shows how privacy management practices are enacted at the individual, intragroup, and group levels. Findings show that participants associate very high risks with sharing private information in the group, partly because it consists of a mix of known others and strangers who are potentially geographically co-located. They adopt several strategies for managing and protecting their privacy at all three levels. The risks associated with the contextual, temporal, and spatial collapse of the imagined audience are identified as important to how participants experience information disclosure in the group. The paper concludes by identifying practical implications that serve as a call for developers to design privacy tools that address the privacy challenges and needs of dynamic groups.
This paper addresses inconsistencies that exist in the measurement instruments HCI researchers use in cross-cultural studies. We study some commonly used measurement instruments that capture cultural dimensions at an individual level and conduct “measurement invariance tests,” which test whether the questions comprising a construct have similar characteristics across different groups (e.g., countries). We find that these cultural dimensions are, to some extent, non-invariant, making statistical comparisons between countries problematic. Furthermore, we study the (non)invariance of the causal relationship between these cultural dimensions and privacy-related constructs, e.g., privacy concern and the amount of information users share on social media. Our results suggest that in several instances, these cultural dimensions have a different effect on privacy-related constructs per country. This severely reduces their usefulness for developing cross-cultural arguments in cross-country studies. We discuss the value of conducting measurement and causal non-invariance tests and urge scholars to develop more robust means of measuring culture.
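As a rough illustration of the invariance question raised in the abstract above, the sketch below fits the same one-factor model to each country’s responses and compares the resulting loadings with Tucker’s congruence coefficient; loading patterns that diverge across countries hint at non-invariance. This is only an informal check under assumed item and column names, using the factor_analyzer package; it is not the formal measurement invariance testing procedure the paper describes.

```python
# Informal check of whether a cultural-dimension scale "behaves the same"
# across countries: fit the same one-factor model per country and compare
# loadings with Tucker's congruence coefficient. Item names are hypothetical.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

ITEMS = ["idv1", "idv2", "idv3", "idv4"]  # hypothetical individualism items

def country_loadings(df: pd.DataFrame) -> np.ndarray:
    """Fit a one-factor model to one country's item responses."""
    fa = FactorAnalyzer(n_factors=1, rotation=None)
    fa.fit(df[ITEMS])
    return fa.loadings_.ravel()

def tucker_congruence(x: np.ndarray, y: np.ndarray) -> float:
    """Tucker's phi: values near 1.0 mean near-identical loading patterns."""
    return float(np.sum(x * y) / np.sqrt(np.sum(x**2) * np.sum(y**2)))

surveys = pd.read_csv("survey_items.csv")   # hypothetical: one row per respondent
loadings = {c: country_loadings(g) for c, g in surveys.groupby("country")}

countries = list(loadings)
for i, a in enumerate(countries):
    for b in countries[i + 1:]:
        phi = tucker_congruence(loadings[a], loadings[b])
        print(f"{a} vs {b}: congruence = {phi:.2f}")
```

A formal test would instead fit a multi-group confirmatory factor model, progressively constrain loadings and intercepts to be equal across countries, and compare model fit; congruence coefficients only flag gross differences in loading patterns.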
Privacy has been conceptualized as a multi-dimensional construct in prior research. However, most multi-dimensional conceptualizations were developed based on populations from Western countries. It remains an open question whether the underlying dimensions of privacy stay consistent in non-Western countries. Through a series of factor analyses on two survey datasets, we compare the dimensions of privacy concern, information disclosure, general disclosiveness, and privacy management strategies among social network users in the US, China, and South Korea. We find significant cross-country differences in the dimensions of these privacy-related concepts, indicating that the fundamental understanding of these concepts varies substantially across these countries. We discuss possible explanations for these cross-country differences and make methodological suggestions for future work.
https://doi.org/10.1145/3449218
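The per-country comparison of dimensions described in the preceding abstract could be prototyped roughly as below: run an exploratory factor analysis separately for each country’s respondents, check how many factors emerge, and see which factor each item loads on most strongly. The column names, item list, and the eigenvalue-greater-than-one rule are assumptions for illustration; the paper’s actual analyses are more involved.

```python
# Rough prototype of a per-country exploratory factor analysis: how many
# privacy-concern dimensions emerge in each country, and which dimension
# does each item join? Item/column names and the Kaiser rule are illustrative.
import pandas as pd
from factor_analyzer import FactorAnalyzer

ITEMS = [f"privacy_concern_{i}" for i in range(1, 13)]  # hypothetical items

surveys = pd.read_csv("privacy_survey.csv")  # hypothetical: rows = respondents

for country, group in surveys.groupby("country"):
    # Step 1: how many factors have an eigenvalue above 1 in this country?
    probe = FactorAnalyzer(rotation=None)
    probe.fit(group[ITEMS])
    eigenvalues, _ = probe.get_eigenvalues()
    n_factors = int((eigenvalues > 1).sum())

    # Step 2: refit with that many factors and assign each item to the
    # factor it loads on most strongly.
    rotation = "varimax" if n_factors > 1 else None
    fa = FactorAnalyzer(n_factors=n_factors, rotation=rotation)
    fa.fit(group[ITEMS])
    loadings = pd.DataFrame(fa.loadings_, index=ITEMS)
    assignment = loadings.abs().idxmax(axis=1)

    print(f"{country}: {n_factors} factors")
    print(assignment.to_string())
```

Differences across countries in either the number of factors or the item-to-factor groupings would correspond to the kind of cross-country divergence in dimensions the abstract reports.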