Worker Discretion Advised: Co-designing Risk Disclosure in Crowdsourced Responsible AI (RAI) Content Work

Abstract

Responsible AI (RAI) content work, such as annotation, moderation, or red teaming for AI safety, often exposes crowd workers to potentially harmful content. While prior work has underscored the importance of communicating well-being risks to employed content moderators, designing effective disclosure mechanisms for crowd workers, while balancing worker protection with the needs of task designers and platforms, remains largely unexamined. To address this gap, we conducted individual co-design sessions with 15 task designers, 11 crowd workers, and 3 platform representatives. We investigated task designers' preferences for support in disclosing task risks, workers' preferences for receiving risk disclosure warnings, and how platform representatives envision their role in shaping risk disclosure practices. We identify design tensions and map the sociotechnical tradeoffs that shape disclosure practices. We contribute design recommendations and feature concepts for risk disclosure mechanisms in the context of RAI content work.

Authors
Alice Qian
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ziqi Yang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ryland Shaw
University of Southern California, Los Angeles, California, United States
Jina Suh
Microsoft Research, Redmond, Washington, United States
Laura Dabbish
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hong Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Co-Design

P1 - Room 129
7 presentations
2026-04-16, 20:15–21:45