Ethics of AI

Conference Name
CHI 2024
Fair Machine Guidance to Enhance Fair Decision Making in Biased People
Abstract

Teaching unbiased decision-making is crucial for addressing biased decision-making in daily life. Although both raising awareness of personal biases and providing guidance on unbiased decision-making are essential, the latter topic remains under-researched. In this study, we developed and evaluated an AI system aimed at educating individuals on making unbiased decisions using fairness-aware machine learning. In a between-subjects experimental design, 99 participants who were prone to bias performed personal assessment tasks. They were divided into two groups: (a) those who received AI guidance for fair decision-making before the task and (b) those who received no such guidance but were informed of their biases. The results suggest that although several participants doubted the fairness of the AI system, fair machine guidance prompted them to reassess their views regarding fairness, reflect on their biases, and modify their decision-making criteria. Our findings provide insights into the design of AI systems for guiding fair decision-making in humans.
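
For readers unfamiliar with the underlying technique, the sketch below shows one common form of fairness-aware machine learning: a logistic-regression classifier trained with a demographic-parity penalty. This is a minimal, hypothetical illustration (the simulated data, penalty form, and names such as `group` are assumptions), not the authors' Fair Machine Guidance system.

```python
# Minimal sketch of fairness-aware learning: logistic regression with a
# demographic-parity penalty. Generic illustration only; data and penalty
# form are hypothetical, not the paper's actual system.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # candidate features (hypothetical)
group = rng.integers(0, 2, size=200)   # sensitive attribute, e.g. gender
y = (X[:, 0] + 0.5 * group + rng.normal(size=200) > 0).astype(float)

w = np.zeros(3)
lam = 2.0                              # fairness penalty strength
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))       # predicted acceptance probability
    grad = X.T @ (p - y) / len(y)      # standard logistic-loss gradient
    # Demographic-parity gap: difference in mean prediction between groups.
    gap = p[group == 1].mean() - p[group == 0].mean()
    dp = p * (1 - p)                   # derivative of the sigmoid
    g1 = (X[group == 1] * dp[group == 1, None]).mean(axis=0)
    g0 = (X[group == 0] * dp[group == 0, None]).mean(axis=0)
    grad += lam * 2 * gap * (g1 - g0)  # push the parity gap toward zero
    w -= 0.1 * grad

print("learned weights:", w)
```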

Authors
Mingzhe Yang
The University of Tokyo, Tokyo, Japan
Hiromi Arai
RIKEN, Tokyo, Japan
Naomi Yamashita
NTT, Keihanna, Japan
Yukino Baba
The University of Tokyo, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3642627

Exploring the Association between Moral Foundations and Judgements of AI Behaviour
Abstract

How do individual differences in personal morality affect perceptions and judgements of morally contentious behaviours from AI systems? By applying Moral Foundations Theory (MFT) to the context of AI, this study sought to develop a predictive Bayesian model for assessing moral judgements based on individual differences in moral constitution. Participants (N=240) were asked to assess six different scenarios, carefully designed to elicit reflection on the behaviour of AI systems. Together with results from the Moral Foundations Questionnaire, we performed both Bayesian modelling and reflexive thematic analysis to investigate the associations between individual differences in moral foundations and judgements of the AI systems. Results revealed a mild association between individual MFT scores and judgements of AI behaviours. Qualitative responses suggested that a participant's technical understanding of AI systems, rather than their intrinsic moral values, predominantly influenced their judgements, with those who judged the behaviour as wrong tending to anthropomorphise the AI system's behaviour.
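
As a rough illustration of the modelling approach the abstract names, the sketch below fits a minimal Bayesian linear regression of judgement scores on Moral Foundations Questionnaire scores using a conjugate Gaussian prior. The simulated data, prior scales, and variable names are hypothetical; the paper's actual model specification may differ.

```python
# Minimal Bayesian linear regression: judgements regressed on five MFT
# foundation scores with a Gaussian prior on the coefficients. All data
# here are simulated stand-ins, not the study's dataset.
import numpy as np

rng = np.random.default_rng(1)
n = 240
mft = rng.normal(size=(n, 5))          # 5 MFT foundation scores per person
true_w = np.array([0.3, 0.1, 0.0, 0.2, 0.1])
judgement = mft @ true_w + rng.normal(scale=1.0, size=n)

# Conjugate posterior for weights under prior N(0, tau^2 I) and
# Gaussian noise with variance sigma^2.
tau2, sigma2 = 1.0, 1.0
Sigma = np.linalg.inv(mft.T @ mft / sigma2 + np.eye(5) / tau2)
mu = Sigma @ mft.T @ judgement / sigma2

print("posterior mean effect of each foundation:", mu.round(2))
```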

Authors
Joe Brailsford
The University of Melbourne, Melbourne, Australia
Frank Vetere
The University of Melbourne, Melbourne, Australia
Eduardo Velloso
University of Melbourne, Melbourne, Victoria, Australia
Paper URL

doi.org/10.1145/3613904.3642712

Which Artificial Intelligences Do People Care About Most? A Conjoint Experiment on Moral Consideration
Abstract

Many studies have identified particular features of artificial intelligences (AIs), such as their autonomy and emotion expression, that affect the extent to which they are treated as subjects of moral consideration. However, there has not yet been a comparison of the relative importance of these features, as is necessary to design and understand increasingly capable, multi-faceted AI systems. We conducted an online conjoint experiment in which 1,163 participants evaluated descriptions of AIs that varied on these features. All 11 features increased how morally wrong participants considered it to harm the AIs. The largest effects were from human-like physical bodies and prosociality (i.e., emotion expression, emotion recognition, cooperation, and moral judgment). For human-computer interaction designers, the importance of prosociality suggests that, because AIs are often seen as threatening, the highest levels of moral consideration may only be granted if the AI has positive intentions.
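
The sketch below illustrates how conjoint experiments are commonly analysed: regressing ratings on randomly assigned, dummy-coded profile features, whose coefficients estimate average marginal component effects (AMCEs). The two features and simulated responses are hypothetical stand-ins, not the paper's actual 11-feature design.

```python
# Sketch of conjoint analysis: OLS of wrongness ratings on randomly
# assigned, dummy-coded AI-profile features. Features and data are
# hypothetical illustrations, not the study's design.
import numpy as np

rng = np.random.default_rng(2)
n = 1163
humanlike_body = rng.integers(0, 2, size=n)  # feature present/absent
prosociality = rng.integers(0, 2, size=n)
wrongness = 3 + 1.2 * humanlike_body + 1.5 * prosociality + rng.normal(size=n)

X = np.column_stack([np.ones(n), humanlike_body, prosociality])
coef, *_ = np.linalg.lstsq(X, wrongness, rcond=None)
# Under random assignment, these coefficients estimate each feature's
# average marginal component effect on moral-wrongness ratings.
print("AMCE estimates (body, prosociality):", coef[1:].round(2))
```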

Authors
Ali Ladak
University of Edinburgh, Edinburgh, Scotland, United Kingdom
Jamie Harris
Sentience Institute, New York, New York, United States
Jacy Reese Anthis
University of Chicago, Chicago, Illinois, United States
Paper URL

doi.org/10.1145/3613904.3642403

The Illusion of Artificial Inclusion
Abstract

Human participants play a central role in the development of modern artificial intelligence (AI) technology, in psychological science, and in user research. Recent advances in generative AI have attracted growing interest in the possibility of replacing human participants in these domains with AI surrogates. We survey several such "substitution proposals" to better understand the arguments for and against substituting human participants with modern generative AI. Our scoping review indicates that the recent wave of these proposals is motivated by goals such as reducing the costs of research and development work and increasing the diversity of collected data. However, these proposals ignore and ultimately conflict with foundational values of work with human participants: representation, inclusion, and understanding. This paper critically examines the principles and goals underlying human participation to help chart out paths for future work that truly centers and empowers participants.

Authors
William Agnew
CMU, Pittsburgh, Pennsylvania, United States
Stevie Bergman
Google DeepMind, London, United Kingdom
Jennifer Chien
University of California, San Diego, San Diego, California, United States
Mark Diaz
Google Research, New York City, New York, United States
Seliem El-Sayed
Google DeepMind, London, United Kingdom
Jaylen Pittman
Stanford University, Stanford, California, United States
Shakir Mohamed
Google DeepMind, London, United Kingdom
Kevin McKee
Google DeepMind, London, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642703

“They only care to show us the wheelchair”: disability representation in text-to-image AI models
Abstract

This paper reports on disability representation in images output from text-to-image (T2I) generative AI systems. Through eight focus groups with 25 people with disabilities, we found that models repeatedly presented reductive archetypes for different disabilities. Often these representations reflected broader societal stereotypes and biases, which our participants were concerned to see reproduced through T2I. Our participants discussed further challenges with using these models, including the current reliance on prompt engineering to reach satisfactorily diverse results. Finally, they offered suggestions for how to improve disability representation, such as showing multiple, heterogeneous images for a single prompt and including the prompt with the generated images. Our discussion reflects on the tensions and tradeoffs we found among the diverse perspectives shared, to inform future research on representation-oriented evaluation metrics and development processes for generative AI systems.

Authors
Kelly Avery Mack
University of Washington, Seattle, Washington, United States
Rida Qadri
Google Research, San Francisco, California, United States
Remi Denton
Google, New York, New York, United States
Shaun K. Kane
Google Research, Boulder, Colorado, United States
Cynthia L. Bennett
Google, New York, New York, United States
Paper URL

doi.org/10.1145/3613904.3642166
