Generative AI is rapidly reshaping young people's digital experiences, from providing emotional support to introducing new dimensions of risk. Yet existing safety frameworks are not equipped to handle the unique risks posed by GenAI. To investigate how youth safety is being addressed in this new landscape, we conducted a systematic review of 30 GenAI-youth studies (N=30) published between 2014 and 2025. We found that GenAI-youth research was primarily led by AI experts, with minimal involvement from youth development experts or young people themselves. Safety was typically framed as a technical system feature, optimized through filters, benchmarks, or guardrails, rather than as a relational, contextual, and developmentally grounded concern. We call on the HCI community to re-evaluate its approach to participation in AI: we must move beyond reactive, system-driven approaches to youth safety in GenAI toward a more holistic, proactive model in which multistakeholder inclusion is a core aspect of the entire AI lifecycle, leading to safer and more equitable systems.

Content Warning: This paper discusses sensitive topics, such as self-harm, which may be triggering.
ACM CHI Conference on Human Factors in Computing Systems