Natural Language Generation (NLG) supports the creation of personalized, contextualized, and targeted content. However, the algorithms underpinning NLG have come under scrutiny for reinforcing gender, racial, and other problematic biases. Recent research in NLG seeks to remove these biases through principles of fairness and privacy. Drawing on gender and queer theories from sociology and Science and Technology Studies, we consider how NLG can contribute towards the advancement of gender equity in society. We propose a conceptual framework and technical parameters for aligning NLG with feminist HCI qualities. We present three approaches: (1) adhering to current approaches of removing sensitive gender attributes, (2) steering gender differences away from the norm, and (3) queering gender by troubling stereotypes. We discuss the advantages and limitations of these approaches across three hypothetical scenarios: newspaper headlines, job advertisements, and chatbots. We conclude by discussing considerations for implementing this framework and related ethical and equity agendas.
Transgender and non-binary people face substantial challenges in the world, ranging from social inequities and discrimination to lack of access to resources. Though technology cannot fully solve these problems, technological solutions may help to address some of the challenges trans people and communities face. We conducted a series of participatory design sessions (total N = 21 participants) to understand trans people's most pressing challenges and to involve this population in the design process. We detail four types of technologies trans people envision: technologies for changing bodies, technologies for changing appearances / gender expressions, technologies for safety, and technologies for finding resources. We found that centering trans people in the design process enabled inclusive technology design that primarily focused on sharing community resources and prioritized connection between community members.
Online spaces play crucial roles in the lives of most LGBTQ+ people, but can also replicate and exacerbate existing intracommunity tensions and power dynamics, potentially harming subgroups within this marginalized community. Using qualitative probes and interviews, we engaged a diverse group of 25 bi+ (attracted to more than one gender) people to explore these dynamics. We identify two types of intracommunity conflict that bi+ users face (validity and normative conflicts), and a resulting set of what we call latent harms, or coping strategies for dealing with conflict that have delayed negative psychological effects for bi+ users. Using intersectionality as a sensitizing concept to understand shifting power dynamics embedded in sociotechnical contexts, we discuss challenges for future design work including the need to account for intracommunity dynamics within marginalized groups and the utility of disentangling conflict from harm.
Using fitness trackers to generate and collect quantifiable data is a widespread practice aimed at better understanding one’s health and body. The intentional design of fitness trackers as genderless or universal is predicated on masculinist design values and assumptions that do not result in “neutral” devices and systems. Instead, ignoring gender in the design of fitness tracking devices marks a dangerous ongoing inattention to the needs, desires, and experiences of women, as well as transgender and gender non-conforming persons. We utilize duoethnography, a methodology emphasizing personal narrative and dialogue, as a tool that promotes feminist reflexivity in the design and study of fitness tracking technologies. Using the Jawbone UP3 as our object of study, we present findings that illustrate the gendered physical and interface design features and discuss how these features reproduce narrow understandings of gender, health, and lived experiences.
Biases in language influence how we interact with each other and society at large. Language affirming gender stereotypes is often observed in various contexts today, from recommendation letters and Wikipedia entries to fiction novels and movie dialogue. Yet to date, there is little agreement on the methodology for quantifying gender stereotypes in natural language (specifically the English language). Common methodologies (including those adopted by companies tasked with detecting gender bias) rely on a lexicon approach largely based on the original BSRI study from 1974. In this paper, we reexamine the role of gender stereotype detection in the context of modern tools by comparatively analyzing the efficacy of lexicon-based approaches and the end-to-end, ML-based approaches prevalent in state-of-the-art natural language processing systems. Our experiments on a large dataset show that, even compared to an updated lexicon-based approach, end-to-end classification approaches are significantly more robust and accurate, even when trained on moderately sized corpora.
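To make the contrast between the two methodological families concrete, the sketch below is an illustrative assumption rather than the paper's implementation: the gender-coded word lists, training texts, and labels are placeholders, and the classifier is a minimal stand-in for the end-to-end systems the abstract refers to.

```python
# Illustrative sketch (not the cited paper's code): lexicon-based scoring
# versus a minimal end-to-end classifier for gender-coded language.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Lexicon approach: count matches against gender-coded word lists ---
# Placeholder word lists for illustration only (not the BSRI lexicon).
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "warm"}
MASCULINE_CODED = {"dominant", "competitive", "assertive", "ambitious"}

def lexicon_score(text: str) -> float:
    """Return a score in [-1, 1]: >0 leans feminine-coded, <0 masculine-coded."""
    tokens = text.lower().split()
    fem = sum(t in FEMININE_CODED for t in tokens)
    masc = sum(t in MASCULINE_CODED for t in tokens)
    total = fem + masc
    return 0.0 if total == 0 else (fem - masc) / total

# --- End-to-end approach: learn the decision boundary from labeled text ---
# Toy training data standing in for a moderately sized labeled corpus.
train_texts = [
    "a supportive and collaborative team player",
    "a dominant, competitive self-starter",
]
train_labels = ["feminine-coded", "masculine-coded"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

example = "we seek an assertive and ambitious leader"
print(lexicon_score(example))          # rule-based score from word counts
print(clf.predict([example])[0])       # label learned from the training data
```

The design difference the abstract evaluates is visible here: the lexicon score is fixed by its word lists, while the classifier can, given enough labeled data, pick up stereotyped phrasing that no hand-built lexicon anticipates.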