This study session has ended. Thank you for participating.
Queer identity research largely overlooks wearable technology. Most work exploring sociocultural considerations of wearable technology determines what is “socially acceptable” based on privileged bodies, excluding queer perspectives. We address this by establishing the foundations of a knowledge base for wearables that support queer expression. We conducted a two-phase qualitative study exploring queer expressive practices and wearable technologies through 16 semi-structured interviews and 15 body mapping workshops with the queer community. We observed themes framing the queer community’s understanding of queer expression, wearable technology, and wearable technology for queer users. Providing design considerations and discussions on the potential of our methods, our work enables the creation of wearable technologies that offer meaningful user experiences for the queer community.
CAUTION: This paper discusses topics that could trigger those with histories of homophobia, transphobia, gender dysphoria, racism or eating disorders. Please use caution when engaging with this work.
Design toolkits that aim to promote equity offer designers simplified approaches to creating more equitable technology. However, it is important to understand how equity is conceptualized in practice. As curated collections of methods, toolkits signal how equity is imagined in design. In this paper, we perform a qualitative analysis of 17 design toolkits related to equity, exploring alternative design approaches that address inequity in design and evaluating whether equity toolkits align with calls for changes to design practice, as well as Nancy Fraser's dimensions of justice. We find that design toolkits focus on the ‘digital divide’ rather than redistributing world-building power, and thus continue to keep design power with professional designers. We also find that ‘design thinking’ continues to influence design toolkits, and that the simplicity of toolkits does not engage with the complexities that shape equity in practice. We conclude with suggestions to help researchers and designers rethink design toolkits.
Social media platforms use content moderation to reduce and remove problematic content. However, much of the discourse on the benefits and pitfalls of moderation has so far focused on users in the West. Little is known about how users in the Global South interact with the humans and algorithms behind opaque moderation systems. To fill this gap, we conducted interviews with 19 Bangladeshi social media users who received restrictions for violating community standards on Facebook. We found that the users perceived the underlying human-AI infrastructure as embodying coloniality by amplifying power relations, centering Western norms, and perpetuating historical injustices and the erasure of minoritized expressions. Based on these findings, we establish that current moderation systems propagate historical power relations and patterns of oppression, and we discuss ways to rethink moderation in a fundamentally decolonial way.
As the use of voice assistants continues to grow, their homogeneity becomes even more problematic: the UNESCO report “I’d Blush if I Could” shows that designing only feminine voice assistants encourages negative behavior, both toward virtual assistants and toward real people [3]. While masculine text-to-speech (TTS) voices exist, ones that cover the full range of gender presentations, such as non-binary or gender-ambiguous voices, are largely missing. In this paper, we present a method of creating a non-binary TTS voice and an example voice, Sam, created with input from the non-binary and transgender communities. We have open-sourced the resulting voice, along with the process and data used to create it. Finally, we present results from a large-scale survey showing that non-binary individuals are more likely than cisgender individuals to prefer a non-binary voice assistant, and we discuss differences across age and gender.
Homelessness presents a long-standing problem worldwide. Like other welfare services, homeless services have gained increased traction in Machine Learning (ML) research. Unhoused persons are vulnerable, and using their data in the ML pipeline raises serious concerns about the unintended harms and consequences of prioritizing different ML values. To address this, we conducted a critical analysis of 40 research papers identified through a systematic literature review of ML research on homelessness service provision. We found that the values of novelty, performance, and identifying limitations were uplifted in these papers, whereas the values of (in)efficiency, (low/high) cost, speed, (violated) privacy, and (homeless condition) reproducibility were collapsed. Consequently, unhoused persons were lost (i.e., humans were deprioritized) across the ML abstraction levels of predictors, categories, and algorithms. Our findings illuminate potential pathways forward at the intersection of data science, HCI, and STS by situating humans at the center to support this vulnerable community.
Engaging diverse participants in HCI research is critical for creating safe, inclusive, and equitable technology. However, there is a lack of guidelines on when, why, and how HCI researchers collect study participants' race and ethnicity. Our paper aims to take the first step toward such guidelines by providing a systematic review and discussion of the status quo of race and ethnicity data collection in HCI. Through an analysis of 2016–2021 CHI proceedings and a survey with 15 authors who published in these proceedings, we found that reporting race and ethnicity of participants is very rare (<3%) and that researchers are far from consensus. Drawing from multidisciplinary literature and our findings, we devise considerations for HCI researchers to decide why, when, and from whom to collect race and ethnicity data. For truly inclusive, equitable technologies, we encourage deliberate decisions rather than default omissions.