This study group has ended. Thank you for participating.
Changing a Twitter account’s privacy setting between public and protected changes the visibility of past tweets. By inspecting the privacy settings of more than 100K Twitter users over 3 months, we noticed that over 40% of those users changed their privacy setting at least once, with around 16% changing it over 5 times. This observation motivated us to explore the reasons why people switch their privacy settings. We studied these switching phenomena quantitatively, by comparing users' tweeting behaviour when public versus protected, and qualitatively, using two follow-up surveys (n=100, n=324) to understand the potential reasoning behind the observed behaviours. Our quantitative analysis shows that users who switch privacy settings mention others and share hashtags more when their setting is public. Our surveys highlighted that users switch to protected to share personal content and regulate boundaries, while they switch to public to interact with others in ways the protected setting prevents.
People feel concerned, angry, and powerless when subjected to surveillance, data breaches and other privacy-violating experiences with institutions (PVEIs). Collective action may empower groups of people affected by a PVEI to jointly demand redress, but a necessary first step is for the collective to agree on demands. We designed a sensitizing prototype to explore how to shepherd a collective to generate a unified set of demands for redress in response to a triggering PVEI. We found that collectives can converge on high-priority concerns and demands for redress, and that many of their demands indicated preferences for broad reform. We then gathered a panel of security and privacy experts to react to the collective’s demands. Experts were dismissive, preferring incremental measures that cleanly mapped onto existing legal structures. We argue this misalignment may help uphold the power chasm between data-harvesting institutions and the individuals whose personal data they monetize.
There is limited information regarding how users employ password managers in the wild and why they use them in that manner. To address this knowledge gap, we conduct observational interviews with 32 password manager users. Using grounded theory, we identify four theories describing the processes and rationale behind participants' usage of password managers. We find that many users simultaneously use both a browser-based and a third-party manager, using each as a backup for the other, with this new paradigm having intriguing usability and security implications. Users also eschew generated passwords because these passwords are challenging to enter and remember when the manager is unavailable, necessitating new generators that create easy-to-enter and easy-to-remember passwords. Additionally, the credential audits provided by most managers overwhelm users, limiting their utility and indicating a need for more proactive and streamlined notification systems. We also discuss mobile usage, adoption and promotion, and other related topics.
Users avoid engaging with privacy policies because they are lengthy and complex, making it challenging to retrieve relevant information. In response, researchers have proposed contextual privacy policies (CPPs) that embed relevant privacy information directly into their affiliated contexts. To date, CPPs have been limited to concept showcases. This work evolves CPPs into a production tool that automatically extracts and displays concise policy information. We first evaluated the technical functionality on the US's 500 most visited websites with 59 participants. Based on our results, we further revised the tool to deploy it in the wild with 11 participants over ten days. We found that our tool is effective at embedding CPP information on websites. Moreover, we found that the tool's usage led to more reflective privacy behavior, making CPPs powerful in helping users understand the consequences of their online activities. We contribute design implications around CPP presentation to inform future systems design.
People share photos on Social Network Sites (SNSs), but at the same time want to keep some photo content private. This tension between sharing and privacy has led researchers to try to solve this problem, but without considering users’ needs. To fill this gap, we present a novel interface that expands privacy options beyond recipient control (R). Our system can also flag sensitive content (C) and obfuscate (O) it (RCO). We then describe the results of a two-step experiment that compares RCO with two alternative interfaces: (R), which mimics existing SNS privacy options by providing recipient control, and a system that in addition to recipient control also flags sensitive content (RC). Results suggest RC performs worse than R regarding perceived privacy risks, willingness to share, and user experience. However, RCO, which provides obfuscation options, restores these metrics to the same levels as R. We conclude by providing insights on system implementation.