Dark and Bright Side of Participatory Red-Teaming with Targets of Stereotyping for Eliciting Harmful Behaviors from Large Language Models

Abstract

Red-teaming—where adversarial prompts are crafted to expose harmful behaviors and assess risks—offers a dynamic approach to surfacing underlying stereotypical bias in large language models. Because such subtle harms are best recognized by those with lived experience, involving targets of stereotyping as red-teamers is essential. However, critical challenges remain in leveraging their lived experience for red-teaming while safeguarding psychological well-being. We conducted an empirical study of participatory red-teaming with 20 individuals stigmatized by stereotypes against nonprestigious college graduates in South Korea’s rigid educational meritocracy. Through mixed-methods analysis, we found participants transformed experienced discrimination into strategic expertise for identifying biases, while facing psychological costs such as stress and negative reflections on group identity. Notably, red-team participation enhanced their sense of agency and empowerment through their role as guardians of the AI ecosystem. We discuss the implications for designing participatory red-teaming that prioritizes both the ethical treatment and the empowerment of stigmatized groups.

Award
Honorable Mention
Authors
Sieun Kim
KAIST, Daejeon, Korea, Republic of
Yeeun Jo
Keimyung University, Daegu, Korea, Republic of
Sungmin Na
KAIST, Daejeon, Korea, Republic of
Hyunseung Lim
KAIST, Daejeon, Korea, Republic of
Eunchae Lee
Korea Advanced Institute of Science and Technology, Daejeon, Korea, Republic of
Yu Min Choi
KAIST, Daejeon, Korea, Republic of
Soohyun Cho
Keimyung University, Daegu, Korea, Republic of
Hwajung Hong
KAIST, Daejeon, Korea, Republic of

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Social Media Discourse and Online Harms

P1 - Room 119
7 presentations
2026-04-17, 20:15–21:45