“I Don’t Think RAI Applies to My Model” – Engaging Non-champions with Sticky Stories for Responsible AI Work

Abstract

Responsible AI (RAI) tools—checklists, templates, and governance processes—often engage RAI champions, individuals intrinsically motivated to advocate ethical practices, but fail to reach non-champions, who frequently dismiss them as bureaucratic tasks. To explore this gap, we shadowed meetings and interviewed data scientists at an organization, finding that practitioners perceived RAI as irrelevant to their work. Building on these insights and theoretical foundations, we derived design principles for engaging non-champions, and introduced sticky stories—narratives of unexpected ML harms designed to be concrete, severe, surprising, diverse, and relevant, unlike widely circulated media to which practitioners are desensitized. Using a compound AI system, we generated and evaluated sticky stories through human and LLM assessments at scale, confirming they embodied the intended qualities. In a study with 29 practitioners, we found that, compared to regular stories, sticky stories significantly increased time spent on harm identification, broadened the range of harms recognized, and fostered deeper reflection.

Award
Best Paper
Authors
Nadia Nahar
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chenyang Yang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yanxin Chen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Wesley Hanwen Deng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Ken Holstein
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Motahhare Eslami
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Christian Kästner
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Social Impact and Responsible Tech

P1 - Room 120
7 presentations
2026-04-16, 20:15–21:45