Surveillance of communication between incarcerated and non-incarcerated people has steadily increased, enabled partly by technological advancements.
Third-party vendors control communication tools for most U.S. prisons and jails and offer surveillance capabilities beyond what individual facilities could realistically implement.
Frequent communication with family improves mental health and post-carceral outcomes for incarcerated people, but does discomfort about surveillance affect how their relatives communicate with them?
To explore this question, along with families' understanding of, attitudes toward, and reactions to surveillance, we conducted 16 semi-structured interviews with participants who have incarcerated relatives.
Among other findings, we learn that participants continue to communicate despite privacy concerns they feel helpless to address.
We also observe inaccuracies in participants’ beliefs about surveillance practices.
We discuss the implications of inaccurate understandings of surveillance, misaligned incentives between end users and vendors, how our findings inform ongoing conversations about carceral justice, and recommendations for more privacy-sensitive communication tools.
Increasingly, icons are being proposed to concisely convey privacy-related information and choices to users. However, complex privacy concepts can be difficult to communicate. We investigate which icons effectively signal the presence of privacy choices. In a series of user studies, we designed and evaluated icons and accompanying textual descriptions (link texts) conveying choice, opting-out, and sale of personal information --- the latter an opt-out mandated by the California Consumer Privacy Act (CCPA). We identified icon-link text pairings that conveyed the presence of privacy choices without creating misconceptions, with a blue stylized toggle icon paired with "Privacy Options" performing best. The two CCPA-mandated link texts ("Do Not Sell My Personal Information" and "Do Not Sell My Info") accurately communicated the presence of do-not-sell opt-outs with most icons. Our results provide insights for the design of privacy choice indicators and highlight the necessity of incorporating user testing into policy making.
"Notice and choice" is the predominant approach for data privacy protection today. There is considerable user-centered research on providing effective privacy notices but not enough guidance on designing privacy choices. Recent data privacy regulations worldwide established new requirements for privacy choices, but system practitioners struggle to implement legally compliant privacy choices that also provide users with meaningful privacy control.
We constructed a design space for privacy choices based on a user-centered analysis of how people exercise privacy choices in real-world systems.
This work contributes a conceptual framework that considers privacy choice as a user-centered process as well as a taxonomy for practitioners to design meaningful privacy choices in their systems.
We also present a use case of how we leverage the design space to finalize the design decisions for a real-world privacy choice platform, the Internet of Things (IoT) Assistant, to provide meaningful privacy control in the IoT.
The deployment of technologies to track and mitigate the spread of COVID-19 has surfaced tensions between individual autonomy and the collective good. The tension reflects a conflict between two central concerns: (1) effectively controlling the spread of the pandemic and (2) respecting individual rights, values, and freedoms. We explored these tensions in an online experiment (n = 389) designed to identify the influence of social orientation and communicative framing on perceptions and expected use of pandemic-tracking apps. We found that social orientation is a statistically significant predictor of app perception and expected use, with a collectivist orientation associated with higher levels and an individualist orientation with lower levels for both aspects. We found interactions between social orientation and communicative framing, as well as a connection between privacy concerns and expected duration of app use. Our findings hold important implications for the design, deployment, and adoption of technology for the public good. Shaping the post-pandemic social contract requires considering the long-term sociocultural impact of these technological solutions.
This paper addresses the question of whether the recently proposed approach of concise privacy notices in apps and on websites is effective in raising user awareness. To assess their effectiveness in a realistic setting, we included concise notices in a fictitious but realistic fitness tracking app and, as a cover story, asked participants recruited from an online panel to provide feedback on the app's usability. Importantly, after giving feedback, participants were also asked to recall the data practices described in the notices. The experimental setup varied the saliency and riskiness of the privacy notices. Based on a total sample of 2,274 participants, our findings indicate that concise privacy notices are indeed a promising approach to raising user awareness of privacy information when displayed in a salient way, especially when the notices describe risky data practices. Our results may be helpful for regulators, user advocates, and transparency-oriented companies in creating or enforcing better privacy transparency for average users who do not read traditional privacy policies.
Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis. Due to their relative novelty, complexity, and opacity, these technologies provoke a variety of novel questions for design and governance. We interviewed researchers, developers, industry leaders, policymakers, and designers involved in their deployment to explore motivations, expectations, perceived opportunities and barriers to adoption. This provided insight into several pertinent challenges facing the adoption of these technologies, including: how they might make a nebulous concept like privacy computationally tractable; how to make them more usable by developers; and how they could be explained and made accountable to stakeholders and wider society. We conclude with implications for the development, deployment, and responsible governance of these privacy-preserving computation techniques.
We present PriView, a concept that allows privacy-invasive devices in the users’ vicinity to be visualised. PriView is motivated by an ever-increasing number of sensors in our environments tracking potentially sensitive data (e.g., audio and video). At the same time, users are oftentimes unaware of this, which violates their privacy. Knowledge about potential recording would enable users to avoid accessing such areas or not to disclose certain information. We built two prototypes: a) a mobile application capable of detecting smart devices in the environment using a thermal camera, and b) VR mockups of six scenarios where PriView might be useful (e.g., a rental apartment). In both, we included several types of visualisation. Results of our lab study (N=24) indicate that users prefer simple, permanent indicators while wishing for detailed visualisations on demand. Our exploration is meant to support future designs of privacy visualisations for varying smart environments.
The COVID-19 pandemic has fueled the development of smartphone applications to assist disease management. Many "corona apps" require widespread adoption to be effective, which has sparked public debates about the privacy, security, and societal implications of government-backed health applications.
We conducted a representative online study in Germany (n = 1003), the US (n = 1003), and China (n = 1019) to investigate user acceptance of corona apps, using a vignette design based on the contextual integrity framework. We explored apps for contact tracing, symptom checks, quarantine enforcement, health certificates, and mere information.
Our results provide insights into data processing practices that foster adoption and reveal significant differences between countries, with user acceptance being highest in China and lowest in the US. Chinese participants prefer the collection of personalized data, while German and US participants favor anonymity. Across countries, contact tracing is viewed more positively than quarantine enforcement, and technical malfunctions negatively impact user acceptance.
The shutdown measures necessary to stop the spread of COVID-19 have amplified the role of technology in intimate partner violence (IPV). Survivors may be forced to endure lockdowns with their abusers, intensifying the dangers of technology-enabled abuse (e.g. stalking, harassment, monitoring, surveillance). They may also be forced to rely on potentially compromised devices to reach support networks: a dangerous dilemma for digital safety. This qualitative study examines how technologists with computer security expertise provided remote assistance to IPV survivors during the pandemic. Findings from 24 consults with survivors and five focus groups with technologist consultants show how remote delivery of technology support services raised three fundamental challenges: (1) ensuring safety for survivors and consultants; (2) assessing device security over a remote connection; and (3) navigating new burdens for consultants, including emotional labor. We highlight implications for HCI researchers creating systems that enable access to remote expert services for vulnerable people.