Who Gets to Define Safety? A Systematic Review of How Generative AI Research Addresses Youth Online Safety

Abstract

Generative AI is rapidly reshaping young people's digital experiences, from providing emotional support to introducing new dimensions of risk. Yet existing safety frameworks are not equipped to handle the unique risks posed by GenAI. To investigate how youth safety is being addressed in this new landscape, we conducted a systematic review of GenAI-youth studies (N=30) published between 2014 and 2025. We found that GenAI-youth research was primarily led by AI experts, with minimal involvement from youth development experts or young people themselves. Safety was typically framed as a technical system feature, optimized through filters, benchmarks, or guardrails, rather than as a relational, contextual, and developmentally grounded concern. We call on the HCI community to re-evaluate its approach to participation in AI. We must move beyond reactive, system-driven GenAI approaches to youth safety toward a more holistic, proactive model in which multistakeholder inclusion is a core aspect throughout the AI lifecycle, leading to safer and more equitable systems. Content Warning: This paper discusses sensitive topics, such as self-harm, which may be triggering.

Authors
Ozioma Collins Oguine
University of Notre Dame, Notre Dame, Indiana, United States
Adriana Alvarado Garcia
IBM Research, Yorktown Heights, New York, United States
Michael Muller
Independent Researcher, Medford, Massachusetts, United States
Karla Badillo-Urquiola
University of Notre Dame, South Bend, Indiana, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Teenagers & Technology

P1 - Room 121
7 presentations
April 15, 2026, 18:00–19:30