Interface Support for Evaluating Disability Bias in AI Generated Images

Abstract

Generative text-to-image (T2I) models often output images that depict stereotypes of people with disabilities. One way to mitigate the risk of these biases is to intervene at the user level, supporting T2I users themselves in identifying biases and acting accordingly. To understand how to design such support and gauge its potential effectiveness, we implemented two interventions: (1) an education module informing users about disability stereotypes in T2I images and (2) AI-generated feedback about potential stereotypes in a given image. We evaluated these options alone and in combination through a controlled experiment (N=103) and a qualitative study (N=10). Our results demonstrate that interface-based interventions can help users identify stereotypes, but that people do not always want to avoid them. Participants wanted image subjects to "look" disabled, which sometimes inadvertently perpetuated stereotypes. Our results indicate clear ways for T2I interfaces to support users in prompting for and assessing images.

Authors
Kelly Avery Mack
University of Washington, Seattle, Washington, United States
Lucy Jiang
University of Washington, Seattle, Washington, United States
Lotus Zhang
University of Washington, Seattle, Washington, United States
Leah Findlater
University of Washington, Seattle, Washington, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Power, Values, and the Politics of Accessibility

P1 - Room 112
7 presentations
2026-04-13, 20:15–21:45