Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts

Abstract

Critical scholarship has elevated the problem of gender bias in the data sets used to train virtual assistants (VAs). Most work has focused on explicit biases in language, especially against women, girls, femme-identifying people, and genderqueer folk; implicit associations through word embeddings; and limited models of gender and masculinities, especially toxic masculinities, the conflation of sex and gender, and a sex/gender binary framing of the masculine as diametric to the feminine. Yet we must also interrogate how masculinities are “coded” into language and the assumption of “male” as the linguistic default: implicit masculine biases. To this end, we examined two natural language processing (NLP) data sets. We found that when gendered language was present, so were gender biases, especially masculine biases. Moreover, these biases related in nuanced ways to the NLP context. We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.
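As a rough illustration of how a lexicon like AVA could be applied (a minimal sketch, not the authors' method; the categories, term lists, and function names below are hypothetical placeholders, not the published dictionary), a dictionary-based matcher might flag gendered terms in VA-related text:

```python
import re

# Minimal sketch of dictionary-based gendered-language matching, in the
# spirit of a lexicon like AVA. The categories and term lists below are
# illustrative placeholders, not the published dictionary.
GENDERED_LEXICON = {
    "masculine": {"he", "him", "his", "man", "men", "guy"},
    "feminine": {"she", "her", "hers", "woman", "women", "gal"},
}

def flag_gendered_terms(text: str) -> dict[str, list[str]]:
    """Return the lexicon terms found in `text`, grouped by category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return {
        category: [t for t in tokens if t in terms]
        for category, terms in GENDERED_LEXICON.items()
    }

# Example: a typical VA-style utterance.
print(flag_gendered_terms("Ask him to set a timer, he said."))
# -> {'masculine': ['him', 'he'], 'feminine': []}
```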

Authors
Katie Seaborn
Tokyo Institute of Technology, Tokyo, Japan
Shruti Chandra
University of Waterloo, Waterloo, Ontario, Canada
Thibault Fabre
The University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3544548.3581017

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Critical Fairness

Hall C
6 presentations
2023-04-26 23:30 to 2023-04-27 00:55