Obtaining meaningful and informed consent from users is essential for ensuring autonomy and control over one's data. Notice and consent, the standard mechanism for collecting consent, has been widely criticized. While other individualized solutions have been proposed, this paper argues that a collective approach to consent is worth exploring. First, individual consent is infeasible to collect in many data collection scenarios. Second, harms resulting from data processing are often communal, given how interconnected some data are. Finally, ensuring truly informed consent for every individual has proven impractical. We propose collective consent, operationalized through consent assemblies, as one alternative framework. We establish collective consent's theoretical foundations and use speculative design to envision consent assemblies that leverage deliberative mini-publics. We present two vignettes: i) replacing notice and consent, and ii) collecting consent for GenAI model training. We employ future backcasting to identify the requirements for realizing collective consent and explore its potential applications in contexts where individual consent is infeasible.
Taking and sharing photos is a routine practice in childcare institutions, used to document children’s learning, communicate with families, and support marketing. These practices are typically regulated through consent forms, the institutional mechanism for authorizing photography and media use. While prior research has examined parents’ photo-taking and sharing, little is known about consent in institutional childcare, where formal policies and non-parental figures (e.g., staff and administrators) shape children’s privacy in distinct ways. To investigate this, we analyzed 42 consent forms and conducted 21 semi-structured interviews with parents, educators, and administrators in U.S.-based childcare institutions. Our findings reveal that consent forms serve as procedural, one-time agreements rather than meaningful safeguards. Parents navigate consent pragmatically amidst structural precarity and power asymmetries, while staff performs the unseen labor of consent enforcement. We conclude with implications for reimagining consent and designing usable institutional mechanisms that support children’s privacy and safety in practice.
Internet-connected medical devices introduce complex cybersecurity risks that challenge the established practice of informed consent. It remains unclear how patients weigh these abstract, dynamic threats against concrete clinical benefits. We present findings from a large-scale (N=2,666) vignette-based experiment designed to uncover the factors driving patient decision-making. Participants chose whether to adopt a connected pacemaker, weighing its enhanced clinical outcomes against potential vulnerabilities. We systematically varied communication factors, including the source of risk information (e.g., clinician, FDA), risk framing, and the details of a subsequent vulnerability disclosure. Our results reveal that patient choice hinges on pre-existing physician trust and risk framing; we observed no effect of the information's source. We also find that initial choices act as powerful anchors and that detailed disclosures increase security confidence. Our work provides crucial empirical evidence on this trade-off, offering actionable guidance to better support informed consent for life-critical connected technologies.
Recently, red teaming, with roots in security, has become a key evaluative approach for ensuring the safety and reliability of Generative Artificial Intelligence. However, most existing work emphasizes technical benchmarks and attack success rates, leaving the socio-technical practices of how red teaming datasets are defined, created, and evaluated under-examined. Drawing on 22 interviews with practitioners who design and evaluate red teaming datasets, we examine the data practices and standards that underpin this work. Because adversarial datasets determine the scope and accuracy of model evaluations, they are critical artifacts for assessing potential harms from large language models. Our contributions are, first, empirical evidence of how practitioners conceptualize red teaming and develop and evaluate red teaming datasets. Second, we reflect on how practitioners' conceptualization of risk leads them to overlook context, interaction type, and user specificity. We conclude with three opportunities for HCI researchers to expand the conceptualization and data practices of red teaming.
Smart garments and circular-economy endeavours nurture imaginaries of sustainable futures. However, the intersection of these trends involves privacy risks: when smart garments are recycled, their biometric data should be erased to protect earlier users' privacy. Unfortunately, this data erasure may not always occur. To examine privacy perceptions connected with reused smart garments, we conducted a two-week speculative enactment, preceded by systemic future scenario development. Eight participants wore a reused smart shirt prototype that seemed to leak a prior user's data. The participants initially disregarded privacy problems associated with smart-garment reuse but changed their perceptions upon recognising the risks of surveillance and of private data being disclosed to the garment's future users. Discussing the systemic future scenarios with participants spotlighted implications for future privacy related to data ownership, the digital divide, and environmental authoritarianism. These findings call for anticipatory approaches that heighten sensitivity to uncertainty and implicit assumptions in researching privacy in possible futures.
Smart, pervasive Augmented Reality (AR) glasses are making their way out of the research labs. Many big tech companies are developing these promising next-generation interaction devices, along with the apps and services around them. When integrated with emerging face recognition technologies (FRT), Pervasive AR glasses can become powerful everyday tools. However, little is known about their acceptance or their perceptual and ethical ramifications. To address this, we developed a Pervasive AR technology probe with functional FRT and conducted an empirical study with 54 participants in a public environment. We collected interview data on the perceived ethical implications of combining Pervasive AR with FRT. We identified five dominant themes capturing potential concerns and characteristics. Based on those findings, we propose developing future Pervasive AR systems around principles of symmetry and consent---what we call a Kantian approach. We hope that our research will inform the design and development of near-future smart glasses.
Extending prior HCI and CSCW research on the invisible challenges domestic care workers face, we examine how childcare workers, particularly nannies, experience and manage workplace risks. Drawing on interviews with 21 nannies, we identified three interrelated risks—physical, emotional, and financial—arising from structural and relational constraints in employers' homes. Through the lens of risk work, we show how these multi-dimensional constraints create tensions that hinder nannies' direct risk mitigation strategies. This often compels them to prioritize indirect risk management to avoid tensions, leaving the risks themselves unresolved. Our study highlights the need for future research and sociotechnical interventions that address domestic childcare workers' unique constraints, identify their coping strategies through a risk work lens, and illuminate the risks obscured by indirect coping. We further call for recognizing the limitations of both personal tools and employer-centered home technologies, and we propose worker-centered, reciprocal interventions as well as virtual and psychological separation in the workplace.