Stereotypes and Gender

Conference Name
CHI 2025
Super Kawaii Vocalics: Amplifying the "Cute" Factor in Computer Voice
Abstract

“Kawaii” is the Japanese concept of cute, which carries sociocultural connotations related to social identities and emotional responses. Yet, virtually all work to date has focused on the visual side of kawaii, including in studies of computer agents and social robots. In pursuit of formalizing the new science of kawaii vocalics, we explored what elements of voice relate to kawaii and how they might be manipulated, manually and automatically. We conducted a four-phase study (grand N = 512) with two varieties of computer voices: text-to-speech (TTS) and game character voices. We found kawaii “sweet spots” through manipulation of fundamental and formant frequencies, but only for certain voices and to a certain extent. Findings also suggest a ceiling effect for the kawaii vocalics of certain voices. We offer empirical validation of the preliminary kawaii vocalics model and an elementary method for manipulating kawaii perceptions of computer voice.

Keywords
Computer Voice
Kawaii Computing
Voice Interaction
Voice Assistants
Speech Signal Processing
Video Games
Character Design
Kawaii
Japan
Authors
Yuto Mandai
Katie Seaborn
Tomoyasu Nakano
Xin Sun
Yijia Wang
Jun Kato
DOI

10.1145/3706598.3713709

Paper URL

https://doi.org/10.1145/3706598.3713709

Video
The Effect of Gender De-biased Recommendations – A User Study on Gender-specific Preferences
Abstract

Recommender systems treat users inherently differently. Sometimes, however, personalization turns into discrimination. Gender bias occurs when a system treats users differently based on gender. While most research discusses measures and countermeasures for gender bias, one recent study explored whether users enjoy gender de-biased recommendations. However, its methodology has significant shortcomings: it fails to validate its de-biasing method appropriately and compares biased and unbiased models that differ in key properties. We reproduce the study in a 2x2 between-subjects design with n=800 participants. Moreover, we examine the authors' hypothesis that educating users on gender bias improves their attitude towards de-biasing. We find that the genders perceive de-biasing differently. The female users (the majority group) rate biased recommendations significantly higher, while the male users (the minority group) indicate no preference. Educating users on gender bias increased acceptance, though not significantly. We consider our contribution vital towards understanding how gender de-biasing affects different user groups.

Authors
Thorsten Krause
German Research Center for Artificial Intelligence, Osnabrück, Niedersachsen, Germany
Lorena Göritz
German Research Center for Artificial Intelligence, Osnabrück, Germany
Robin Gratz
German Research Center for Artificial Intelligence, Osnabrück, Germany
DOI

10.1145/3706598.3713155

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713155

Video
Beyond the "Industry Standard": Focusing Gender-Affirming Voice Training Technologies on Individualized Goal Exploration
Abstract

Gender-affirming voice training is critical for the transition process for many transgender individuals, enabling their voice to align with their gender identity. Individualized voice goals guide and motivate the voice training journey, but existing voice training technologies fail to define clear goals. We interviewed six voice experts and ten transgender individuals with voice training experience (voice trainees), focusing on how they defined, triangulated, and used voice goals. We found that goal voice exploration involves navigation between descriptive and technical goals, and continuous reevaluation throughout the voice training journey. Our study reveals how goal descriptions, subjective satisfaction, voice examples, and voice modification and training technologies inform goal exploration, and identifies risks of overemphasizing goals. We identified technological implications informed by existing expert and trainee strategies, and provide guidelines for supporting individualized goals throughout the voice training journey based on brainstorming with trainees and experts.

Award
Honorable Mention
Authors
Kassie C. Povinelli
University of Wisconsin-Madison, Madison, Wisconsin, United States
Hanxiu 'Hazel' Zhu
University of Wisconsin-Madison, Madison, Wisconsin, United States
Yuhang Zhao
University of Wisconsin-Madison, Madison, Wisconsin, United States
DOI

10.1145/3706598.3713430

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713430

Video
"Python is for girls!": Masculinity, Femininity, and Queering Inclusion at Hackathons
Abstract

This paper explores how queerness intersects with hackathon culture, reinforcing or challenging its masculine norms. By utilizing autoethnographic insights from seven UK hackathons, it reveals that while queerness is visibly celebrated, inclusion remains conditional—accepted only when it aligns with masculine-coded technical authority. Femininity, regardless of the queer identities of those who embody it, is devalued and associated with lesser technical competence. Beyond social dynamics, gendered hierarchies influence programming tools, roles, and physical environments, embedding exclusion within technical culture. Although gender-fluid expressions like cosplay provide moments of subversion, they remain limited by the masculine framework of hackathons. This study contributes to human-computer interaction and feminist technology studies by showing that queerness alone does not dismantle gendered hierarchies. It advocates for moving beyond visibility to actively challenge masculinized definitions of technical legitimacy, promoting alternative, non-exclusionary models of expertise.

Award
Honorable Mention
Authors
Siân Brooke
University of Amsterdam, Amsterdam, Netherlands
DOI

10.1145/3706598.3713235

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713235

Video
A Scoping Review of Gender Stereotypes in Artificial Intelligence
Abstract

People often apply gender stereotypes to Artificial Intelligence (AI), and AI design frequently reinforces these stereotypes, perpetuating traditional gender ideologies in state-of-the-art technology. Despite growing interest in investigating this phenomenon, there is little conceptual clarity or consistency regarding what actually constitutes a "gender stereotype" in AI. Therefore, it is critical to provide a more comprehensive picture of existing understandings and ongoing discussions of gender stereotypes of AI to guide AI design that reduces the harmful effects of these stereotypes. To this end, this paper presents a scoping review of over 20 years of research across HCI, HRI, and various social science disciplines on how gender stereotypes are applied to AI. We outline the methods and contexts of this growing body of work, develop a typology to clarify these stereotypes, highlight under-explored approaches for future research, and offer guidelines to improve rigor and consistency in this field that may inform responsible AI design in the future.

Authors
Wen Duan
Clemson University, Clemson, South Carolina, United States
Lingyuan Li
The University of Texas at Austin, Austin, Texas, United States
Guo Freeman
Clemson University, Clemson, South Carolina, United States
Nathan McNeese
Clemson University, Clemson, South Carolina, United States
DOI

10.1145/3706598.3713093

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713093

Video
Breaking the Binary: A Systematic Review of Gender-Ambiguous Voices in Human-Computer Interaction
Abstract

Voice interfaces come in many forms in Human-Computer Interaction (HCI), such as voice assistants and robots. These are often gendered, i.e. they sound masculine or feminine. Recently, there has been a surge in creating gender-ambiguous voices, aiming to make voice interfaces more inclusive and less prone to stereotyping. In this paper, we present the first systematic review of research on gender-ambiguous voices in HCI literature, with an in-depth analysis of 36 articles. We report on the definition and availability of gender-ambiguous voices, creation methods, user perception and evaluation techniques. We conclude with several concrete action points: clarifying key terminology and definitions for terms such as gender-ambiguous, gender-neutral, and non-binary; conducting an initial acoustic analysis of gender-ambiguous voices; taking initial steps toward standardising evaluation metrics for these voices; establishing an open-source repository of gender-ambiguous voices; and developing a framework for their creation and use. These recommendations provide important insights for fostering the development and adoption of inclusive voice technologies.

Authors
Martina De Cet
Chalmers University of Technology, Gothenburg, Sweden
Mohammad Obaid
Chalmers University of Technology, Gothenburg, Sweden
Ilaria Torre
Chalmers University of Technology, Gothenburg, Sweden
DOI

10.1145/3706598.3713608

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713608

Video
Doing the Feminist Work in AI: Reflections from an AI Project in Latin America
Abstract

The contemporary AI development landscape is dominated by big corporations, lacks diversity, and mostly centres the Global North, or applies extractivist logics in the South. This paper showcases a feminist process of AI development from Latin America, where we created an interactive, AI-powered tool that helps criminal court officers open justice data, addressing a data gap on gender-based violence. Through a collaborative autoethnography, drawing from Latin American feminisms, we unpack and visibilize the feminist work that was required, as a crucial step to counter hegemonic narratives. Foregrounding the subjugated knowledges of our experiences, we offer a concrete example of a feminist approach to AI development grounded in practice. With this, we aim to critically inspire those who consider building technology in service of social justice causes, or who choose to build AI systems otherwise.

Award
Best Paper
Authors
Marianela Ciolfi Felice
KTH Royal Institute of Technology, Stockholm, Sweden
Ivana Feldfeber
DataGénero - Observatorio de Datos con Perspectiva de Género, Buenos Aires, Argentina
Carolina Glasserman Apicella
Universidad Nacional de San Martín, Buenos Aires, Argentina
Yasmín Belén Quiroga
DataGénero - Observatorio de Datos con Perspectiva de Género, Buenos Aires, Argentina
Julián Ansaldo
Collective AI, Buenos Aires, Argentina
Luciano Lapenna
Universidad Tecnológica Nacional, Buenos Aires, Argentina
Santiago Bezchinsky
Universidad de Buenos Aires, Buenos Aires, Argentina
Raul Barriga Rubio
Collective AI, Buenos Aires, Argentina
Mailén García
Universidad Nacional de Mar del Plata, Mar del Plata, Argentina
DOI

10.1145/3706598.3713681

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713681

Video