FemTech, including apps for fertility, menstruation, and menopause, increasingly shapes how users manage intimate aspects of their health. Yet these apps are often built on opaque commercial models, raising ethical concerns about consent, privacy, and misuse of sensitive health data. While prior work has documented these risks, less is known about how users perceive and negotiate commercial data practices in FemTech apps. We conducted an online survey with 187 participants, combining factorial vignettes with provotypes (interface prototypes designed to provoke reflection) to examine user boundaries and discomforts around FemTech data collection and commercial use. Participants drew sharp distinctions across data types, resisting peripheral data collection and pervasive tracking. Commercial practices were often judged conditionally: they were tolerated only when functionally relevant. Notably, our provotypes, even under exaggerated transparency, elicited more forgiving responses to commercial practices than the brief text descriptions in the vignettes did. We discuss implications for designing transparent, accountable, and user-aligned FemTech.
The rapid adoption of generative AI (GenAI) chatbots has reshaped access to sexual and reproductive health (SRH) information, particularly since the overturning of Roe v. Wade, as individuals assigned female at birth increasingly turn to online sources. However, existing research remains largely model-centered and pays limited attention to user privacy and safety. We conducted semi-structured interviews with 18 U.S.-based participants from both restrictive and non-restrictive states who had used GenAI chatbots to seek SRH information. Adoption was influenced by perceived utility, usability, credibility, accessibility, and anthropomorphism, and many participants disclosed sensitive personal SRH details. Participants identified multiple privacy risks, including excessive data collection, government surveillance, profiling, model training, and data commodification. While most participants accepted these risks in exchange for perceived utility, abortion-related queries elicited heightened safety concerns. Few participants employed protective strategies beyond minimizing disclosures or deleting data. Based on these findings, we offer design and policy recommendations, such as health-specific features and stronger moderation practices, to enhance privacy and safety in GenAI-supported SRH information seeking.
Technology-facilitated abuse (TFA) is a widespread and harmful dimension of interpersonal violence. Documenting TFA can unlock mitigating actions for survivors, such as legal orders of protection, but existing documentation tools are insufficient. This paper considers whether a trauma-informed design approach could yield more effective methods for documenting TFA and, concretely, how to approach trauma-informed digital evidence collection. Toward this goal, we use trauma-informed methods to design a new tool, Sherloc, that helps identify and document TFA within tech clinic interventions. We evaluated Sherloc in feedback sessions with legal experts and then in a small pilot program in the U.S. From our design inquiry, we present novel guidelines for trauma-informed digital evidence collection. We call on HCI researchers to build on our work to envision trauma-informed methods of documenting TFA.
As more young women in China live alone, they navigate entangled privacy, security, and safety (PSS) risks across smart homes, online platforms, and public infrastructures. Drawing on six participatory threat modeling (PTM) workshops (n = 33), we present a human-centered threat model that illustrates how digitally facilitated physical violence, digital harassment and scams, and pervasive surveillance by individuals, companies, and the state are interconnected and mutually reinforcing. We also document four mitigation strategies employed by participants: smart home device configurations, boundary management, sociocultural practices, and social media tactics, each of which can introduce new vulnerabilities and emotional burdens. Based on these insights, we developed a digital PSS guidebook for young women living alone (YWLA) in China. We further propose actionable design implications for smart home devices and social media platforms, along with policy and legal recommendations and directions for educational interventions.
Harassment impacts the safety and well-being of young adults in Pakistan. Prior research has largely focused on women, often imposing external definitions of harm and overlooking how individuals themselves understand and respond to harassment. This study examines how Pakistani young adults define, experience, and cope with harassment. Drawing on 33 semi-structured interviews guided by a human-centered threat modeling framework, we surface context-specific threat models. Participants’ definitions of harassment were shaped by gender norms, religious values, and moral judgments. Women described harassment as a routine part of life, tied to public visibility and modesty norms. Men also reported harassment, though it was framed by different dynamics, such as pressure to maintain control, avoid vulnerability, and conform to masculine norms. Across participants, formal reporting pathways were viewed as untrustworthy or unsafe. Our findings highlight the need for interventions that reflect local definitions of harm, address relational adversaries, and support safety within sociocultural contexts.
Intimate partner violence (IPV) is defined as “abuse or aggression that occurs in a romantic relationship.” IPV survivors face barriers when help-seeking, such as epistemic injustice: secondary victimization through dismissal or indifference upon disclosure, misdirection, or inappropriate interventions. Survivors may leverage generative AI to make sensitive disclosures and access hermeneutic resources. However, these tools mediate outcomes for IPV survivors through novel manifestations of epistemic injustice. Using mixed methods, we investigated hermeneutic resource provision by large language models (LLMs). We evaluated LLM responses to IPV disclosures on three axes: hermeneutic resource provision, readability, and risk. Prompts were derived from a content analysis of discussions of IPV and generative AI in five abuse subreddits. We contribute a taxonomy of seven uses of generative AI in the experience of IPV, an empirical illustration of epistemic inequity, and considerations for evaluating epistemic harm in generative AI. Content warning: This study contains descriptions of abuse and violence.
This paper presents a feminist autoethnographic critique of technology and research on gender-based violence, grounded in my lived experience and current work as an HCI researcher engaged in community-led design on forced marriage and broader gender-based violence. Through chronological narratives, I recount encounters with digital technologies during help-seeking, from early online searches to the quiet work of rebuilding life, alongside reflections from my position as a researcher embedded in my own community and observing how HCI engages with it. These accounts reveal how digital interventions often fail to align with the realities of those affected, whether by prematurely pushing legal solutions, vanishing once research funding ends, or reinforcing harmful labels such as “victim.” I argue for HCI approaches that sustain tools beyond prototypes, translate research into practice, and attend to language and power, calling for research and design that begins with those most impacted: not spoken for, but speaking.