"As an Autistic Person Myself:" The Bias Paradox Around Autism in LLMs

Abstract

Large Language Models (LLMs) like ChatGPT, used by over 200 million people monthly, are increasingly applied in disability contexts, including autism research. However, there has been limited exploration of the potential biases these models hold about autistic people. To explore what biases ChatGPT demonstrates about autistic people, we prompted GPT-3.5 to create three personas, choose one to be autistic, and explain its reasoning for this choice and any suggested changes to the persona description. Our quantitative analysis of the chosen personas indicates that gender and profession influenced GPT's choices. Additionally, our qualitative analysis revealed ChatGPT's tendency to highlight the importance of representation while simultaneously perpetuating mostly negative biases about autistic people, illustrating a "bias paradox," a concept adapted from feminist studies. By applying this concept to LLMs, we provide a lens through which researchers might identify, understand, and address fundamental challenges in the development of responsible and inclusive AI.

Authors
Sohyeon Park
University of California, Irvine, Irvine, California, United States
Aehong Min
University of California, Irvine, Irvine, California, United States
Jesus Armando Beltran
California State University, Los Angeles, Los Angeles, California, United States
Gillian R. Hayes
University of California, Irvine, Irvine, California, United States
DOI

10.1145/3706598.3713420

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713420

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Neurological Considerations

Room: G414+G415
7 presentations
2025-04-28 20:10:00 – 2025-04-28 21:40:00