“Kawaii” is the Japanese concept of cute, which carries sociocultural connotations related to social identities and emotional responses. Yet, virtually all work to date has focused on the visual side of kawaii, including in studies of computer agents and social robots. In pursuit of formalizing the new science of kawaii vocalics, we explored what elements of voice relate to kawaii and how they might be manipulated, manually and automatically. We conducted a four-phase study (grand N = 512) with two varieties of computer voices: text-to-speech (TTS) and game character voices. We found kawaii “sweet spots” through manipulation of fundamental and formant frequencies, but only for certain voices and to a certain extent. Findings also suggest a ceiling effect for the kawaii vocalics of certain voices. We offer empirical validation of the preliminary kawaii vocalics model and an elementary method for manipulating kawaii perceptions of computer voice.
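As a rough illustration of the kind of acoustic manipulation this abstract describes, and not the authors' actual pipeline, the sketch below raises a voice clip's fundamental frequency and shifts its formants using Praat's "Change gender" command through the parselmouth Python library; the file name and parameter values are illustrative assumptions, not the reported "sweet spots".

    import parselmouth
    from parselmouth.praat import call

    # Hypothetical input clip; any mono WAV recording of a voice would do.
    sound = parselmouth.Sound("voice_sample.wav")

    # Praat's "Change gender" command re-synthesises the signal with shifted
    # formants and a new pitch median. Arguments: pitch floor (Hz), pitch
    # ceiling (Hz), formant shift ratio, new pitch median (Hz), pitch range
    # factor, duration factor. The values below are arbitrary examples.
    shifted = call(sound, "Change gender", 75, 600, 1.15, 280, 1.0, 1.0)

    call(shifted, "Save as WAV file", "voice_sample_shifted.wav")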
Recommender systems inherently treat users differently. Sometimes, however, personalization turns into discrimination. Gender bias occurs when a system treats users differently based on gender. While most research discusses measures and countermeasures for gender bias, one recent study explored whether users enjoy gender de-biased recommendations. However, its methodology has significant shortcomings: it fails to validate its de-biasing method appropriately and compares biased and unbiased models that differ in key properties. We reproduce the study in a 2x2 between-subjects design with n=800 participants. Moreover, we examine the original authors' hypothesis that educating users on gender bias improves their attitude towards de-biasing. We find that the genders perceive de-biasing differently: the female users (the majority group) rate biased recommendations significantly higher, while the male users (the minority group) indicate no preference. Educating users on gender bias increased acceptance, but not significantly. We consider our contribution vital to understanding how gender de-biasing affects different user groups.
Gender-affirming voice training is critical to the transition process for many transgender individuals, enabling their voices to align with their gender identity. Individualized voice goals guide and motivate the voice training journey, but existing voice training technologies fail to define clear goals. We interviewed six voice experts and ten transgender individuals with voice training experience (voice trainees), focusing on how they defined, triangulated, and used voice goals. We found that goal voice exploration involves navigating between descriptive and technical goals, and continuous reevaluation throughout the voice training journey. Our study reveals how goal descriptions, subjective satisfaction, voice examples, and voice modification and training technologies inform goal exploration, and identifies risks of overemphasizing goals. We identify technological implications informed by existing expert and trainee strategies, and provide guidelines for supporting individualized goals throughout the voice training journey based on brainstorming with trainees and experts.
This paper explores how queerness intersects with hackathon culture, reinforcing or challenging its masculine norms. By utilizing autoethnographic insights from seven UK hackathons, it reveals that while queerness is visibly celebrated, inclusion remains conditional—accepted only when it aligns with masculine-coded technical authority. Femininity, regardless of the queer identities of those who embody it, is devalued and associated with lesser technical competence. Beyond social dynamics, gendered hierarchies influence programming tools, roles, and physical environments, embedding exclusion within technical culture. Although gender-fluid expressions like cosplay provide moments of subversion, they remain limited by the masculine framework of hackathons. This study contributes to human-computer interaction and feminist technology studies by showing that queerness alone does not dismantle gendered hierarchies. It advocates for moving beyond visibility to actively challenge masculinized definitions of technical legitimacy, promoting alternative, non-exclusionary models of expertise.
People often apply gender stereotypes to Artificial Intelligence (AI), and AI design frequently reinforces these stereotypes, perpetuating traditional gender ideologies in state-of-the-art technology. Despite growing interest in investigating this phenomenon, there is little conceptual clarity or consistency regarding what actually constitutes a "gender stereotype" in AI. It is therefore critical to provide a more comprehensive picture of existing understandings and ongoing discussions of gender stereotypes of AI to guide AI design that reduces the harmful effects of these stereotypes. To this end, this paper presents a scoping review of over 20 years of research across HCI, HRI, and various social science disciplines on how gender stereotypes are applied to AI. We outline the methods and contexts of this growing body of work, develop a typology to clarify these stereotypes, highlight under-explored approaches for future research, and offer guidelines to improve rigor and consistency in this field, which may inform responsible AI design in the future.
Voice interfaces come in many forms in Human-Computer Interaction (HCI), such as voice assistants and robots. These are often gendered, i.e. they sound masculine or feminine. Recently, there has been a surge in creating gender-ambiguous voices, aiming to make voice interfaces more inclusive and less prone to stereotyping. In this paper, we present the first systematic review of research on gender-ambiguous voices in HCI literature, with an in-depth analysis of 36 articles. We report on the definition and availability of gender-ambiguous voices, creation methods, user perception and evaluation techniques. We conclude with several concrete action points: clarifying key terminology and definitions for terms such as gender-ambiguous, gender-neutral, and non-binary; conducting an initial acoustic analysis of gender-ambiguous voices; taking initial steps toward standardising evaluation metrics for these voices; establishing an open-source repository of gender-ambiguous voices; and developing a framework for their creation and use. These recommendations provide important insights for fostering the development and adoption of inclusive voice technologies.
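One of the review's action points, an initial acoustic analysis of gender-ambiguous voices, could start as simply as estimating a voice's median fundamental frequency, since F0 is the cue most strongly tied to perceived voice gender. The sketch below does this with librosa; the file name and the roughly 145-175 Hz "ambiguous" reference range are illustrative assumptions rather than values taken from the reviewed articles.

    import librosa
    import numpy as np

    # Hypothetical voice clip to analyse.
    y, sr = librosa.load("voice.wav", sr=None)

    # Frame-wise F0 estimation with the pYIN algorithm.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
    )

    median_f0 = np.nanmedian(f0)  # unvoiced frames are NaN and are ignored
    print(f"Median F0: {median_f0:.1f} Hz")

    # Roughly 145-175 Hz is often cited as the overlap region between typical
    # masculine and feminine speaking pitch; treat this only as a coarse first
    # screen, not a perceptual judgement of gender ambiguity.
    print("Within commonly cited ambiguous range:", 145 <= median_f0 <= 175)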
The contemporary AI development landscape is dominated by big corporations, lacks diversity, and mostly centres the Global North, or applies extractivist logics in the South. This paper showcases a feminist process of AI development from Latin America, where we created an interactive, AI-powered tool that helps criminal court officers open justice data, addressing a data gap on gender-based violence. Through a collaborative autoethnography, drawing from Latin American feminisms, we unpack and visibilize the feminist work that was required, as a crucial step to counter hegemonic narratives. Foregrounding the subjugated knowledges of our experiences, we offer a concrete example of a feminist approach to AI development grounded in practice. With this, we aim to critically inspire those who consider building technology in service of social justice causes, or who choose to build AI systems otherwise.