Privacy and surveillance are central features of public discourse around the use of computing systems. As the systems we design and study are increasingly used and regulated as potential instruments of surveillance, HCI researchers—even those whose focus is not privacy—find themselves needing to understand privacy in their work. Concepts like contextual integrity and boundary regulation have become touchstones for thinking about privacy in HCI. In this paper, we draw on the HCI and privacy literature to understand the limitations of commonly used theories and examine their assumptions, politics, strengths, and weaknesses. We use a case study from the HCI literature to illustrate conceptual gaps in existing frameworks through which privacy requirements can fall. Finally, we advocate vulnerability as a core concept for privacy theorizing and examine how feminist, queer-Marxist, and intersectional thinking may augment our existing repertoire of privacy theories to create a more inclusive scholarship and design practice.
Determining which photos are sensitive is difficult. Although emerging computer vision systems can label content items, previous attempts to distinguish private or sensitive content fall short. There is no human-centered taxonomy that describes what content is sensitive or how sharing preferences for content differ across recipients. To fill this gap, we introduce a new sensitive content elicitation method that surmounts limitations of previous approaches; using this method, we collected sensitive content from 116 participants. We also recorded participants' sharing preferences with 20 recipient groups. Next, we conducted a card sort to surface user-defined categories of sensitive content. Using data from these studies, we generated a taxonomy that identifies 28 categories of sensitive content. We also establish how sharing preferences for content differ across groups of recipients. This taxonomy can serve as a framework for understanding photo privacy, which can, in turn, inform new photo privacy protection mechanisms.
Data brokers and advertisers increasingly collect data in one context and use it in another. When users encounter a misuse of their data, do they subsequently disclose less information? We report on human-subjects experiments with 25 in-person and 280 online participants. First, participants provided personal information amidst distractor questions. A week later, while participants completed another survey, they received either a robotext or online banner ad seemingly unrelated to the study. Half of the participants received an ad containing their name, partner's name, preferred cuisine, and location; others received a generic ad. We measured how many of 43 potentially invasive questions participants subsequently chose to answer. Participants reacted negatively to the personalized ad, yet answered nearly all invasive questions accurately. We unpack our results relative to the privacy paradox, contextual integrity, and power dynamics in crowdworker platforms.
As people's offline and online lives become increasingly entwined, the sensitivity of personal information disclosed online is increasing. Disclosures often occur through structured disclosure fields (e.g., drop-down lists). Prior research suggests these fields may limit privacy, with non-disclosing users being presumed to be hiding undesirable information. We investigated this in the context of HIV status disclosure in online dating apps used by men who have sex with men. Our online study asked participants (N=183) to rate profiles in which HIV status was either disclosed or undisclosed. We tested three designs for displaying undisclosed fields. The visibility of undisclosed fields had a significant effect on how profiles were rated, and other profile information (e.g., ethnicity) could affect the inferences that develop around undisclosed information. Our research highlights the complexities of designing for non-disclosure and questions the voluntary nature of these fields. We outline further work to ensure that disclosure control is appropriately implemented for sensitive information disclosed online.
Online users' attitudes toward privacy are context-dependent. Studies show that contextual cues are quite influential in motivating users to disclose personal information. Increasingly, these cues are embedded in the interface, but the mechanisms of their effects (e.g., unprofessional design contributing to more disclosure) are not fully understood. We posit that each cue triggers a specific "cognitive heuristic" that provides a rationale for decision-making. Using a national survey (N = 786) that elicited participants' disclosure intentions in common online scenarios, we identify 12 distinct heuristics relevant to privacy and demonstrate that they are systematically associated with information disclosure. The data show that participants for whom a given heuristic is more accessible are more likely to disclose information. Design implications for the protection of online privacy and security are discussed.