This study group meeting has ended. Thank you for participating.
We are constantly surrounded by technology that collects and processes sensitive data, paving the way for privacy violations. Yet, current research on technology-facilitated privacy violations in the physical world is scattered, focuses on specific scenarios, or investigates such violations purely from an expert's perspective. Informed by a large-scale online survey, we first construct a scenario taxonomy based on user-experienced privacy violations in the physical world through technology. We then validate our taxonomy and establish mitigation strategies using interviews and co-design sessions with privacy and security experts. In summary, this work contributes (1) a refined scenario taxonomy for technology-facilitated privacy violations in the physical world, (2) an understanding of how privacy violations manifest in the physical world, (3) a decision tree on how to inform users, and (4) a design space to create notices whenever appropriate. With this, we contribute a conceptual framework to enable a privacy-preserving, technology-connected world.
Legitimate interest is one of the six grounds for processing data under the European Union's General Data Protection Regulation (GDPR). The flexibility and ambiguity of the term "legitimate interests" can be problematic: coupled with the lack of enforcement from legal authorities and differing interpretations across data protection authorities, legitimate interests can be exploited as a loophole to collect more user data.
Drawing insights from multiple disciplines, we ran two studies to empirically investigate the deceptive designs used when legitimate interests are applied in privacy notices, and how user perceptions line up with these practices. We identified six deceptive designs and found that the ways legitimate interest is applied in practice do not match user expectations.
The release of COVID-19 contact tracing apps was accompanied by a heated public debate with much focus on privacy concerns, e.g., possible government surveillance. Many papers studied people's intended behavior to research potential features and uptake of the apps.
Studies in Germany conducted before the app's release, such as that by Häring et al., showed that privacy was an important factor in the intention to install the app.
We conducted a follow-up study two months post-release to investigate the intention-behavior gap, see how attitudes changed after the release, and capture reported behavior.
Analyzing a quota sample (n=837) for Germany, we found that fewer participants mentioned privacy concerns post-release, whereas utility now plays a greater role.
We provide further evidence that the results of intention-based studies should be handled with care when used for prediction purposes.
Digital technologies have increasingly integrated into people's lives, continuously capturing their behavior through potentially sensitive data. In the context of voice assistants, there is a misalignment between experts, regulators, and users on whether and what data is "sensitive", partly due to how data is presented to users: as single interactions. We investigate users' perspectives on the sensitivity and intimacy of their Google Assistant speech records, introduced comprehensively as single interactions, patterns, and inferences. We collect speech records through data donation and explore them in collaboration with 17 users during interviews based on predefined data-sharing scenarios. Our results indicate a tipping point in perceived sensitivity and intimacy as participants delve deeper into their data and the information derived from it. We propose a conceptualization of sensitivity and intimacy that accounts for the fuzzy nature of data and must be disentangled from it. We discuss the implications of our findings and provide recommendations.
Misconceptions about digital security and privacy topics in the general public frequently lead to insecure behavior. However, little is known about the prevalence and extent of such misconceptions in a global context. In this work, we present the results of the first large-scale survey of a global population on misconceptions: we conducted an online survey with n = 12,351 participants in 12 countries on four continents. By investigating influencing factors of misconceptions around eight common security and privacy topics (including E2EE, Wi-Fi, VPN, and malware), we find the country of residence to be the strongest predictor of holding misconceptions. We also identify differences between non-Western and Western countries, demonstrating the need for region-specific research on user security knowledge, perceptions, and behavior. While we did not observe many outright misconceptions, we did identify a lack of understanding and uncertainty about several fundamental privacy and security topics.
Researchers have invested enormous efforts to understand and mitigate users' concerns as technologies collect their private data. However, users often undermine other people's privacy when, e.g., posting other people's photos online, granting mobile applications access to their contacts, or using technologies that continuously sense their surroundings. Research to understand technology adoption and behaviors related to collecting and sharing data about non-users has been severely lacking. An essential step to progress in this direction is to identify and quantify factors that affect technology's use. Toward this goal, we propose and validate a psychometric scale to measure how much an individual values other people's privacy. We theoretically grounded the appropriateness and relevance of the construct and empirically demonstrated the scale's internal consistency and validity. This scale will advance the field by enabling researchers to predict behaviors, design adaptive privacy-enhancing technologies, and develop interventions to raise awareness and mitigate privacy risks.
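To make the notion of internal consistency in the last abstract concrete, the sketch below computes Cronbach's alpha, a widely used internal-consistency statistic for psychometric scales. This is an illustrative example only: the function, item data, and response matrix are hypothetical and are not the authors' actual scale items or analysis.

    # Minimal sketch: Cronbach's alpha as a measure of a scale's internal consistency.
    # All data below is hypothetical, for illustration only.
    import numpy as np

    def cronbach_alpha(item_scores):
        """item_scores: respondents x items matrix of Likert-style answers."""
        item_scores = np.asarray(item_scores, dtype=float)
        k = item_scores.shape[1]                            # number of scale items
        item_var_sum = item_scores.var(axis=0, ddof=1).sum()  # sum of per-item variances
        total_var = item_scores.sum(axis=1).var(ddof=1)        # variance of total scores
        return (k / (k - 1)) * (1 - item_var_sum / total_var)

    # Hypothetical responses from 5 participants to 4 items (1-5 Likert scale)
    responses = np.array([
        [4, 5, 4, 4],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 3, 4],
        [1, 2, 2, 1],
    ])
    print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")

Higher alpha values (commonly above roughly 0.7) indicate that the items measure the same underlying construct consistently; validity would be assessed separately, e.g., against related constructs.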