Large Language Models (LLMs) like ChatGPT, used by over 200 million people monthly, are increasingly applied in disability contexts, including autism research. However, the potential biases these models hold about autistic people remain largely unexamined. To explore these biases, we prompted GPT-3.5 to create three personas, choose one to be autistic, and explain its reasoning for this choice along with any suggested changes to the persona description. Our quantitative analysis of the chosen personas indicates that gender and profession influenced GPT-3.5's choices. Additionally, our qualitative analysis revealed ChatGPT's tendency to highlight the importance of representation while simultaneously perpetuating mostly negative biases about autistic people, illustrating a "bias paradox," a concept adapted from feminist studies. By applying this concept to LLMs, we provide a lens through which researchers might identify, understand, and address fundamental challenges in the development of responsible and inclusive AI.
https://dl.acm.org/doi/10.1145/3706598.3713420
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)