Red-teaming—where adversarial prompts are crafted to expose harmful behaviors and assess risks—offers a dynamic approach to surfacing underlying stereotypical bias in large language models. Because such subtle harms are best recognized by those with lived experience, involving targets of stereotyping as red-teamers is essential. However, critical challenges remain in leveraging their lived experience for red-teaming while safeguarding their psychological well-being. We conducted an empirical study of participatory red-teaming with 20 individuals stigmatized as graduates of nonprestigious colleges within South Korea’s rigid educational meritocracy. Through mixed-methods analysis, we found that participants transformed their experiences of discrimination into strategic expertise for identifying biases, while facing psychological costs such as stress and negative reflections on their group identity. Notably, red-team participation enhanced their sense of agency and empowerment through their role as guardians of the AI ecosystem. We discuss implications for designing participatory red-teaming that prioritizes both the ethical treatment and the empowerment of stigmatized groups.