This study session has ended. Thank you for participating.
In 2020, there were widespread Black Lives Matter (BLM) protests in the U.S. Because many attendees were novice protesters, organizations distributed guides for staying safe at a protest, often including security and privacy advice. To understand what advice novice protesters are given, we collected 41 safety guides distributed during BLM protests in spring 2020. We identified 13 classes of digital security and privacy advice in these guides. To understand whether this advice influences protesters, we surveyed 167 BLM protesters. Respondents reported an array of security and privacy concerns, and their concerns were magnified when considering fellow protesters. While most respondents reported being aware of, and following, certain advice (e.g., choosing a strong phone passcode), many were unaware of key advice like using end-to-end encrypted messengers and disabling biometric phone unlocking. Our results can guide future advice and technologies to help novice protesters protect their security and privacy.
This paper provides empirical evidence of a link between the Fear of Missing Out (FoMO) and reluctant privacy behaviours, to help explain a gap between users' privacy attitudes and their behaviours online (also known as the Privacy Paradox). Using Grounded Theory, we interviewed 25 participants and created a high-level empirically-grounded theory of the relationship between FoMO and reluctant privacy behaviours. We identify three main dimensions in which users feel pressured to participate, even when they have privacy concerns, in order to avoid missing out. We discuss the implications of these results for the design of technologies, and how they may indicate systemic dark design.
Algorithms engineered to leverage rich behavioral and biometric data to predict individual attributes and actions continue to permeate public and private life. A fundamental risk may emerge from misconceptions about the sensitivity of such data, as well as the agency of individuals to protect their privacy when fine-grained (and possibly involuntary) behavior is tracked. In this work, we examine how individuals adjust their behavior when incentivized to avoid the algorithmic prediction of their intent. We present results from a virtual reality task in which gaze, movement, and other physiological signals are tracked. Participants are asked to decide which card to select without an algorithmic adversary anticipating their choice. We find that while participants use a variety of strategies, the data collected remain highly predictive of their choices (80% accuracy). Additionally, a significant portion of participants became more predictable despite their efforts to obfuscate, possibly indicating mistaken priors about the dynamics of algorithmic prediction.
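To make concrete what it means for tracked signals to be "predictive of choice," the following minimal sketch shows one way such a prediction pipeline could be set up. The feature set, classifier, and data below are illustrative assumptions, not the study's actual setup or results.

```python
# Hypothetical sketch: predicting a participant's card choice from tracked
# behavioral signals. The abstract does not specify the model or features used;
# the features and classifier below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials = 500

# Assumed per-trial features, e.g., gaze dwell time on each card,
# head/hand movement toward each card, and a pupil-dilation summary.
X = rng.normal(size=(n_trials, 5))
y = rng.integers(0, 2, size=n_trials)  # which of two cards was chosen (synthetic)

clf = GradientBoostingClassifier()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```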
"There's an app for that" is perhaps the definitive rhetoric of our times. To understand how users navigate the trade-offs involved in using apps that support a variety of everyday activities, we conducted scenario-based semi-structured interviews (n = 25). Despite the technical and regulatory mechanisms that are supposedly meant to empower users to manage their privacy, we found that users express an overarching feeling of resignation regarding privacy matters. Because these apps provide convenience and other benefits, as one participant put it, "there is a very fine line" that marks the divide between feeling empowered in the use of technology and coping with the discomfort and creepiness arising from invasive app behavior. Participants consistently expressed being resigned to disclose data even as they accepted personal responsibility for their own privacy. We apply the findings to discuss the limits of empowerment as a design logic for privacy-oriented solutions.
Research has demonstrated that users' heuristic decision-making processes cause external factors like defaults and framing to influence the outcome of privacy decisions. Proponents of "privacy nudging" have proposed leveraging these effects to guide users' decisions. Our research shows that defaults and framing not only influence the outcome of privacy decisions, but also the process of evaluating the contextual factors associated with the decision, effectively making the decision-making process more heuristic. In our analysis of an existing dataset of scenario-based smart home privacy decisions, we demonstrate that defaults and framing not only have a direct effect on participants' decisions; they also moderate the effect of participants' cognitive appraisals of the presented scenarios on those decisions. These results suggest that nudges like defaults and framing exacerbate the well-researched problem that people often employ heuristics rather than making deliberate privacy decisions, and that privacy-setting interfaces should be designed to avoid triggering heuristic decision-making.
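For readers unfamiliar with moderation analysis, the sketch below shows the kind of interaction model such a finding implies: a regression in which a default condition changes how strongly a scenario appraisal predicts the decision. The variable names and simulated data are assumptions for illustration, not the study's dataset or analysis code.

```python
# Hypothetical sketch of a moderation (interaction) analysis: testing whether a
# default/framing condition changes how strongly participants' appraisals of a
# scenario predict their decision. All names and data are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "appraisal": rng.normal(size=n),           # e.g., perceived risk of the scenario
    "default_on": rng.integers(0, 2, size=n),  # 1 = "allow" default, 0 = "deny" default
})
# Simulate decisions where the appraisal matters less under the "allow" default.
logit = 0.3 + 1.2 * df.appraisal - 0.8 * df.default_on * df.appraisal + 0.9 * df.default_on
df["accept"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# The appraisal:default_on interaction term captures the moderation effect.
model = smf.logit("accept ~ appraisal * default_on", data=df).fit()
print(model.summary())
```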
Designing technologies that support the mutual cybersecurity and autonomy of older adults facing cognitive challenges requires close collaboration between partners. As part of research to design a Safety Setting application for older adults with memory loss or mild cognitive impairment (MCI), we use a scenario-based participatory design approach. Our study builds on previous findings that couples' approach to memory loss was characterized by a desire for flexibility and choice, and an embrace of role uncertainty. We find that couples do not want a system that fundamentally alters their relationship; they look to maximize self-surveillance competence and minimize loss of autonomy for their partners. All want Safety Settings to maintain their mutual safety rather than to designate one partner as the target of oversight. Couples are open to more rigorous surveillance if they have control over what types of activities trigger various levels of oversight.
Smart home products aren't living up to their promise. They promise to transform the way we live, offering convenience, energy efficiency, and safety. However, the reality is significantly less profound and often frustrating. This is particularly apparent in security and privacy experiences: powerlessness, confusion, and annoyance have all been reported.
In order to reduce frustration and help fulfill the promise of smart homes, we need to explore the experience of security and privacy in situ. We analyze data from an ethnographic study observing six UK households over six months to present a longitudinal view of security and privacy user experiences with smart home products. We find inconsistencies in managing security and privacy, e.g., the contrast between the ease of granting consent and the difficulty of withholding it. We identify security and privacy issues in repurposing smart home devices, i.e., using devices outside of their initially intended purposes. We conclude with recommendations for the design of smart home devices.
We investigate how people's "humor style" relates to their online photo-sharing behaviors and reactions to "privacy primes". In an online experiment, we queried 437 participants about their humor style, likelihood to share photo-memes, and history of sharing others' photos. In two treatment conditions, participants were either primed to imagine themselves as the photo-subjects or to consider the photo-subjects' privacy before sharing memes. We found that participants who frequently use aggressive and self-deprecating humor were more likely to violate others' privacy by sharing photos. We also replicated the interventions' paradoxical effects (increasing sharing likelihood) reported in earlier work and, through interaction analyses, identified the subgroups that demonstrated this behavior. When primed to consider the subjects' privacy, only humor deniers (participants who use humor infrequently) demonstrated increased sharing. In contrast, when imagining themselves as the photo-subjects, humor deniers, unlike other participants, did not increase their sharing of photos.
Individuals are known to lie and/or provide untruthful data when providing information online as a way to protect their privacy. Prior studies have attempted to explain when and why individuals lie online. However, no work has examined how people lie online, i.e., the specific strategies they follow to provide untruthful data, or attempted to predict whether people will be truthful or not depending on the specific question or data requested. To close this gap, we present a large-scale study with over 800 participants. Based on this study, we show that it is possible to predict whether users are truthful or not using machine learning with very high accuracy (89.7%). We also identify four main strategies people employ to provide untruthful data and show the factors that influence their choice of strategy. We discuss the implications of our findings and argue that understanding privacy lies at this level can help both users and data collectors.
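As a rough illustration of the kind of per-question truthfulness prediction the abstract describes (not the paper's actual pipeline), one could train a classifier on properties of the question and the respondent; every feature, label, and model choice below is a hypothetical placeholder.

```python
# Hypothetical illustration: predicting, per question, whether a respondent's
# answer is truthful from question and respondent features. Features, data, and
# model choice are assumptions for illustration; the paper's pipeline may differ.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 2000
X = pd.DataFrame({
    "question_sensitivity": rng.uniform(0, 1, n),  # e.g., income vs. favorite color
    "privacy_concern": rng.uniform(0, 1, n),       # respondent's stated concern
    "data_requester_known": rng.integers(0, 2, n), # whether the collector is familiar
})
y = rng.binomial(1, 0.6, n)  # 1 = truthful answer, 0 = untruthful (synthetic labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("Held-out accuracy:", clf.score(X_test, y_test))
print(dict(zip(X.columns, clf.feature_importances_.round(2))))
```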
Smart Home Personal Assistants (SPA) have a complex ecosystem that enables them to carry out various tasks on behalf of the user with just voice commands. SPA capabilities are continually growing, with over a hundred thousand third-party skills in Amazon Alexa, covering several categories, from tasks within the home (e.g., managing smart devices) to tasks beyond the boundaries of the home (e.g., purchasing online, booking a ride). In the SPA ecosystem, information flows through several entities, including SPA providers, third-party skill providers, providers of smart devices, other users, and external parties. Prior studies have not explored privacy norms in the SPA ecosystem, i.e., the acceptability of these information flows. In this paper, we study privacy norms in SPAs based on Contextual Integrity through a large-scale study with 1,738 participants. We also study the influence that the Contextual Integrity parameters and personal factors have on these privacy norms. Further, we identify similarities among the privacy norms studied in terms of their Contextual Integrity parameters to distill more general privacy norms, which could be useful, for instance, to establish suitable privacy defaults in SPAs. We finally provide recommendations for SPA and third-party skill providers based on the privacy norms studied.
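To illustrate what an "information flow" looks like when described with Contextual Integrity parameters, here is a small hypothetical sketch. The parameter names follow the standard CI framing (sender, recipient, subject, information type, transmission principle); the example values and the toy acceptability rule are assumptions, not findings from the study.

```python
# Minimal sketch of describing an SPA information flow with Contextual
# Integrity parameters, as in the vignette-style norms the abstract studies.
# The parameter values and the example norm below are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    sender: str                  # e.g., the SPA provider
    recipient: str               # e.g., a third-party skill provider
    subject: str                 # whose data it is
    information_type: str        # e.g., "voice recordings", "purchase history"
    transmission_principle: str  # e.g., "with explicit consent", "for advertising"

flow = InformationFlow(
    sender="SPA provider",
    recipient="third-party skill provider",
    subject="the account owner",
    information_type="voice recordings",
    transmission_principle="for advertising",
)

# A privacy norm can then be expressed as an acceptability judgment over such
# parameter combinations (here, a toy rule rather than a study result).
unacceptable = flow.transmission_principle == "for advertising"
print(f"Flow judged unacceptable: {unacceptable}")
```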
Phone numbers are intimately connected to our digital lives. People are increasingly required to disclose their phone number in digital spaces, both commercial and personal. While convenient for companies, the pervasive use of phone numbers as user identifiers also poses privacy, security, and access risks for individuals. In order to understand these risks, we present findings from a qualitative online elicitation study with 195 participants about their negative experiences with phone numbers, the consequences they faced, and how those consequences impacted their behavior. Our participants frequently reported experiencing phone number recycling, unwanted exposure, and temporary loss of access to a phone number. The resulting consequences they faced included harassment, account access problems, and privacy invasions. Based on our findings, we discuss service providers' faulty assumptions about phone numbers as user identifiers and the problems arising from phone number recycling, and we provide design and public policy recommendations for mitigating these issues with phone numbers.