Critical Fairness

Conference Name
CHI 2023
A hunt for the Snark: Annotator Diversity in Data Practices
Abstract

Diversity in datasets is a key component of building responsible AI/ML. Despite this recognition, we know little about the diversity among the annotators involved in data production. We investigated the approaches to annotator diversity through 16 semi-structured interviews and a survey with 44 AI/ML practitioners. While practitioners described nuanced understandings of annotator diversity, they rarely designed dataset production to account for diversity in the annotation process. The lack of action was explained through operational barriers: from the lack of visibility in the annotator hiring process, to the conceptual difficulty in incorporating worker diversity. We argue that such operational barriers and the widespread resistance to accommodating annotator diversity surface a prevailing logic in data practices, where neutrality, objectivity, and 'representationalist thinking' dominate. By understanding this logic to be part of a regime of existence, we explore alternative ways of accounting for annotator subjectivity and diversity in data practices.

Award
Honorable Mention
Authors
Shivani Kapania
Google Research, Bengaluru, India
Alex S. Taylor
City, University of London, London, United Kingdom
Ding Wang
Google, Singapore, Singapore
Paper URL

https://doi.org/10.1145/3544548.3580645

Video
(Re-)Distributional Food Justice: Negotiating conflicting views of fairness within a local grassroots community
Abstract

Sustainable HCI and Human-Food-Interaction research have a developing interest in preventing food waste through food sharing. Sustainability requires attention not only to the opportunities and challenges associated with building food sharing groups engaged in the redistribution of food, but also to developing a wider agenda which includes, for instance, the local production of food resources. In this paper, we argue for a better understanding of the different conceptions of ‘fairness’ which inform volunteer and guest practice and in turn mediate community-building efforts. We examine the practices surrounding ‘SharingEvent’ and the challenges to sustainability posed by the heterogeneous, and sometimes contested, commitments of the people involved. We further consider how ICT provided opportunities for explicit examination of ideological differences concerning what ‘sharing’ might mean. Our findings show that community building depends on the negotiation of the different values and purposes identified. We derive recommendations for action-oriented researchers ultimately concerned with systemic transformation.

Authors
Philip Engelbutzeder
University of Siegen, Siegen, Germany
Yannick Bollmann
TH Köln, Cologne, Germany
Katie Berns
Stockholm University, Stockholm, Sweden
Marvin Landwehr
University of Siegen, Siegen, Germany
Franka Schäfer
University of Siegen, Siegen, Germany
Dave Randall
University of Siegen, Siegen, Germany
Volker Wulf
University of Siegen, Siegen, Germany
Paper URL

https://doi.org/10.1145/3544548.3581527

Video
Out of Context: Investigating the Bias and Fairness Concerns of "Artificial Intelligence as a Service"
Abstract

“AI as a Service” (AIaaS) is a rapidly growing market, offering various plug-and-play AI services and tools. AIaaS enables its customers (users), who may lack the expertise, data, and/or resources to develop their own systems, to easily build and integrate AI capabilities into their applications. Yet, it is known that AI systems can encapsulate biases and inequalities that can have societal impact. This paper argues that the context-sensitive nature of fairness is often incompatible with AIaaS' ‘one-size-fits-all’ approach, leading to issues and tensions. Specifically, we review and systematise the AIaaS space by proposing a taxonomy of AI services based on the levels of autonomy afforded to the user. We then critically examine the different categories of AIaaS, outlining how these services can lead to biases or be otherwise harmful in the context of end-user applications. In doing so, we seek to draw research attention to the challenges of this emerging area.

Authors
Kornel Lewicki
University of Cambridge, Cambridge, United Kingdom
Michelle Seng Ah Lee
University of Cambridge, Cambridge, United Kingdom
Jennifer Cobbe
University of Cambridge, Cambridge, United Kingdom
Jat Singh
University of Cambridge, Cambridge, United Kingdom
Paper URL

https://doi.org/10.1145/3544548.3581463

Video
Show me a "Male Nurse"! How Gender Bias is Reflected in the Query Formulation of Search Engine Users
Abstract

Biases in algorithmic systems have led to discrimination against historically disadvantaged groups, including the reinforcement of outdated gender stereotypes. While a substantial body of research addresses biases in algorithms and underlying data, in this work, we study if and how users themselves reflect these biases in their interactions with systems, which expectedly leads to the further manifestation of biases. More specifically, we investigate the replication of stereotypical gender representations by users in formulating online search queries. Following prototype theory, we define the disproportionate mention of the gender that does not conform to the prototypical representative of a searched domain (e.g., “male nurse”) as an indication of bias. In a pilot study with 224 US participants and a main study with 400 UK participants, we find clear evidence of gender biases in formulating search queries. We also report the effects of an educative text on user behaviour and highlight the wish of users to learn about bias-mitigating strategies in their interactions with search engines.

Authors
Simone Kopeinik
Know-Center, Graz, Austria
Martina Mara
Johannes Kepler University Linz, Linz, Austria
Linda Ratz
Know-Center, Graz, Austria
Klara Krieg
University of Innsbruck, Innsbruck, Austria
Markus Schedl
Johannes Kepler University Linz, Linz, Austria
Navid Rekabsaz
Johannes Kepler University Linz, Linz, Austria
Paper URL

https://doi.org/10.1145/3544548.3580863

Video
Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability.
Abstract

Recent research claims that information cues and system attributes of algorithmic decision-making processes affect decision subjects’ fairness perceptions. However, little is still known about how these factors interact. This paper presents a user study (N = 267) investigating the individual and combined effects of explanations, human oversight, and contestability on informational and procedural fairness perceptions for high- and low-stakes decisions in a loan approval scenario. We find that explanations and contestability contribute to informational and procedural fairness perceptions, respectively, but we find no evidence for an effect of human oversight. Our results further show that both informational and procedural fairness perceptions contribute positively to overall fairness perceptions but we do not find an interaction effect between them. A qualitative analysis exposes tensions between information overload and understanding, human involvement and timely decision-making, and accounting for personal circumstances while maintaining procedural consistency. Our results have important design implications for algorithmic decision-making processes that meet decision subjects’ standards of justice.

Award
Best Paper
Authors
Mireia Yurrita
Delft University of Technology, Delft, Netherlands
Tim Draws
TU Delft, Delft, Netherlands
Agathe Balayn
Delft University of Technology, Delft, Netherlands
Dave Murray-Rust
TU Delft, Delft, Zuid Holland, Netherlands
Nava Tintarev
Maastricht University, Maastricht, Netherlands
Alessandro Bozzon
Delft University of Technology, Delft, Netherlands
Paper URL

https://doi.org/10.1145/3544548.3581161

Video
Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
Abstract

Critical scholarship has elevated the problem of gender bias in data sets used to train virtual assistants (VAs). Most work has focused on explicit biases in language, especially against women, girls, femme-identifying people, and genderqueer folk; implicit associations through word embeddings; and limited models of gender and masculinities, especially toxic masculinities, conflation of sex and gender, and a sex/gender binary framing of the masculine as diametric to the feminine. Yet, we must also interrogate how masculinities are “coded” into language and the assumption of “male” as the linguistic default: implicit masculine biases. To this end, we examined two natural language processing (NLP) data sets. We found that when gendered language was present, so were gender biases and especially masculine biases. Moreover, these biases related in nuanced ways to the NLP context. We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.

Authors
Katie Seaborn
Tokyo Institute of Technology, Tokyo, Japan
Shruti Chandra
University of Waterloo, Kitchener, Ontario, Canada
Thibault Fabre
The University of Tokyo, Tokyo, Japan
Paper URL

https://doi.org/10.1145/3544548.3581017

Video