Intimate partner violence (IPV) is defined as "abuse or aggression that occurs in a romantic relationship." IPV survivors face barriers to help-seeking, such as epistemic injustice: secondary victimization through dismissal or indifference upon disclosure, misdirection, and inappropriate interventions. Survivors may leverage generative AI to make sensitive disclosures and access hermeneutic resources. However, these tools mediate outcomes for IPV survivors through novel manifestations of epistemic injustice. Using mixed methods, we investigated hermeneutic resource provision by large language models (LLMs). We evaluated LLM responses to IPV disclosures along three axes: hermeneutic resource provision, readability, and risk. Prompts were derived from a content analysis of discussions of IPV and generative AI across five abuse-related subreddits. We contribute a taxonomy of seven uses of generative AI in the experience of IPV, an empirical illustration of epistemic inequity, and considerations for evaluating epistemic harm in generative AI. Content Warning: This study contains descriptions of abuse and violence.
ACM CHI Conference on Human Factors in Computing Systems