A hunt for the Snark: Annotator Diversity in Data Practices
Description

Diversity in datasets is a key component of building responsible AI/ML. Despite this recognition, we know little about the diversity among the annotators involved in data production. We investigated approaches to annotator diversity through 16 semi-structured interviews and a survey with 44 AI/ML practitioners. While practitioners described nuanced understandings of annotator diversity, they rarely designed dataset production to account for diversity in the annotation process. This lack of action was explained by operational barriers, from the lack of visibility in the annotator hiring process to the conceptual difficulty of incorporating worker diversity. We argue that these operational barriers, and the widespread resistance to accommodating annotator diversity, surface a prevailing logic in data practices in which neutrality, objectivity, and 'representationalist thinking' dominate. By understanding this logic as part of a regime of existence, we explore alternative ways of accounting for annotator subjectivity and diversity in data practices.

(Re-)Distributional Food Justice: Negotiating conflicting views of fairness within a local grassroots community
Description

Sustainable HCI and Human-Food Interaction research have a growing interest in preventing food waste through food sharing. Sustainability requires attention not only to the opportunities and challenges associated with building food-sharing groups engaged in the redistribution of food, but also to developing a wider agenda that includes, for instance, the local production of food resources. In this paper, we argue for a better understanding of the different conceptions of ‘fairness’ that inform volunteer and guest practice and in turn mediate community-building efforts. We examine the practices surrounding ‘SharingEvent’ and the challenges to its sustainability posed by the heterogeneous, and sometimes contested, commitments of the people involved. We further consider how ICT provided opportunities for the explicit examination of ideological differences concerning what ‘sharing’ might mean. Our findings show that community building depends on the negotiation of the different values and purposes we identified. We derive recommendations for action-oriented researchers ultimately concerned with systemic transformation.

Out of Context: Investigating the Bias and Fairness Concerns of "Artificial Intelligence as a Service"
Description

“AI as a Service” (AIaaS) is a rapidly growing market, offering various plug-and-play AI services and tools. AIaaS enables its customers (users), who may lack the expertise, data, and/or resources to develop their own systems, to easily build and integrate AI capabilities into their applications. Yet, it is known that AI systems can encapsulate biases and inequalities that can have societal impact. This paper argues that the context-sensitive nature of fairness is often incompatible with AIaaS’s ‘one-size-fits-all’ approach, leading to issues and tensions. Specifically, we review and systematise the AIaaS space by proposing a taxonomy of AI services based on the levels of autonomy afforded to the user. We then critically examine the different categories of AIaaS, outlining how these services can lead to biases or be otherwise harmful in the context of end-user applications. In doing so, we seek to draw research attention to the challenges of this emerging area.

Show me a "Male Nurse"! How Gender Bias is Reflected in the Query Formulation of Search Engine Users
Description

Biases in algorithmic systems have led to discrimination against historically disadvantaged groups, including the reinforcement of outdated gender stereotypes. While a substantial body of research addresses biases in algorithms and underlying data, in this work we study if and how users themselves reflect these biases in their interactions with such systems, which can be expected to further entrench those biases. More specifically, we investigate the replication of stereotypical gender representations by users in formulating online search queries. Following prototype theory, we define the disproportionate mention of the gender that does not conform to the prototypical representative of a searched domain (e.g., “male nurse”) as an indication of bias. In a pilot study with 224 US participants and a main study with 400 UK participants, we find clear evidence of gender biases in formulating search queries. We also report the effects of an educative text on user behaviour and highlight users’ desire to learn about bias-mitigating strategies in their interactions with search engines.
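
The prototype-theory bias indicator described above lends itself to a simple operationalisation. The following is a minimal, hypothetical Python sketch, not the authors' instrument: the GENDER_TERMS marker list and the PROTOTYPES table are illustrative assumptions, and a query is flagged as counter-prototypical when its explicit gender marker disagrees with the gender stereotypically assumed for the searched domain (e.g., “male nurse”).

```python
# Minimal sketch of the prototype-theory bias indicator described above.
# GENDER_TERMS and PROTOTYPES are illustrative assumptions, not data or
# code from the paper.
GENDER_TERMS = {"male": "m", "man": "m", "female": "f", "woman": "f"}

# Gender stereotypically assumed for each searched domain (hypothetical).
PROTOTYPES = {"nurse": "f", "engineer": "m", "ceo": "m"}

def is_counter_prototypical(query: str) -> bool:
    """True if the query explicitly marks the gender that does NOT
    conform to the prototypical representative of the searched domain,
    e.g. 'male nurse' -> True, 'female nurse' -> False."""
    tokens = query.lower().split()
    if len(tokens) < 2:
        return False  # unmarked queries ('nurse') carry no explicit marker
    marker, domain = tokens[0], tokens[-1]
    gender = GENDER_TERMS.get(marker)
    prototype = PROTOTYPES.get(domain)
    return None not in (gender, prototype) and gender != prototype

if __name__ == "__main__":
    for q in ("male nurse", "female nurse", "nurse"):
        print(q, "->", is_counter_prototypical(q))
```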

Disentangling Fairness Perceptions in Algorithmic Decision-Making: the Effects of Explanations, Human Oversight, and Contestability
Description

Recent research claims that information cues and system attributes of algorithmic decision-making processes affect decision subjects’ fairness perceptions. However, little is known about how these factors interact. This paper presents a user study (N = 267) investigating the individual and combined effects of explanations, human oversight, and contestability on informational and procedural fairness perceptions for high- and low-stakes decisions in a loan approval scenario. We find that explanations and contestability contribute to informational and procedural fairness perceptions, respectively, but we find no evidence for an effect of human oversight. Our results further show that both informational and procedural fairness perceptions contribute positively to overall fairness perceptions, but we do not find an interaction effect between them. A qualitative analysis exposes tensions between information overload and understanding, between human involvement and timely decision-making, and between accounting for personal circumstances and maintaining procedural consistency. Our results have important implications for the design of algorithmic decision-making processes that meet decision subjects’ standards of justice.

Transcending the "Male Code": Implicit Masculine Biases in NLP Contexts
Description

Critical scholarship has elevated the problem of gender bias in data sets used to train virtual assistants (VAs). Most work has focused on explicit biases in language, especially against women, girls, femme-identifying people, and genderqueer folk; implicit associations through word embeddings; and limited models of gender and masculinities, especially toxic masculinities, conflation of sex and gender, and a sex/gender binary framing of the masculine as diametric to the feminine. Yet, we must also interrogate how masculinities are “coded” into language and the assumption of “male” as the linguistic default: implicit masculine biases. To this end, we examined two natural language processing (NLP) data sets. We found that when gendered language was present, so were gender biases and especially masculine biases. Moreover, these biases related in nuanced ways to the NLP context. We offer a new dictionary called AVA that covers ambiguous associations between gendered language and the language of VAs.
