Queer emerging adults (EAs) increasingly seek mental health support through digital technologies. While HCI has studied queer users’ experiences online, less is known about how queer EAs use social media every day to support their mental health. We therefore conducted a cross-sectional survey of queer EAs (ages 18--24, \textit{N} = 313) in the United States, combining latent profile analysis and qualitative analysis, to examine how queer EAs employ coping strategies to support their mental health. Our latent profile analysis revealed three distinct engagement profiles that combined support strategies differently, and our qualitative analysis examined how social media supported and hindered participants’ coping goals. We contribute to HCI by (1) highlighting how queer EAs curate their coping strategies and social media platforms, (2) introducing the perspective of digital coping ecologies, and (3) offering design considerations for supporting queer EAs’ digital coping ecologies.
Teenagers are avid users of Discord, a fast-growing platform for synchronous communication where they often interact with strangers. Because Discord combines private DMs, semi-private voice channels, and public servers in one place, it creates a hybrid environment that can produce complex—and underexplored—safety risks for teenagers. Drawing on 16 interviews with teenage Discord users, this study examines their strategies for navigating risky social interactions on the platform. Our findings reveal that when teenagers encounter risks during social interactions, they exercise vigilance by evaluating suspicious interactions before forming friendships, using safety tools, and engaging in controlled risk-taking to safeguard their privacy and security. At the community level, they mitigate risks through selective participation in servers, a practice supported by vigilant governance structures. We discuss how vigilance enables teenagers to act during risky encounters to protect themselves, advancing understanding of teenagers’ agency in risk navigation and informing teen-centered designs for safer online environments.
Generative AI is rapidly reshaping young people’s digital experiences, from providing emotional support to introducing new dimensions of risks. Yet, existing safety frameworks are not equipped to handle the unique risks posed by GenAI. To investigate how youth safety is being addressed in this new landscape, we conducted a systematic review of GenAI-youth studies (\textit{N} = 30) from 2014--2025. We found that GenAI-youth-related research was primarily led by AI experts with minimal involvement from youth development experts or young people themselves. Safety was typically framed as a technical system feature, optimized through filters, benchmarks, or guardrails, rather than a relational, contextual, and developmentally grounded concern. We call on the HCI community to re-evaluate its approach to participation in AI. We must move beyond reactive, system-driven GenAI approaches to youth safety toward a more holistic, proactive model in which multistakeholder inclusion is a core aspect throughout the AI lifecycle, leading to safer and more equitable systems. Content Warning: This paper discusses sensitive topics, such as self-harm, which may be triggering.
The social media app BeReal positions itself as a space for meaningful connections; however, little is known about how the app’s unique combination of ephemerality, informality, and improvisation actually supports relationship maintenance. We aimed to understand what role BeReal plays in young adults’ friendships and lives. Drawing on interviews with 31 young adults at a large university in the northeastern U.S., we find that users treated BeReal as a fun, low-effort space to share glimpses of everyday life with smaller networks of friends. BeReal helped users maintain relationships, especially with past or geographically distant friends, but did not necessarily deepen bonds with close friends. Users welcomed the app’s minimalistic user experience but raised doubts about the platform’s longevity. Based on our findings, we present the Social Media Effort (SME) heuristic to help designers and researchers visualize how content and audience shape the social media ecosystem. We advocate that the HCI community design new platforms, since dominant business models are not poised to support relationship maintenance.
Generative AI chatbots like ChatGPT are increasingly embedded in teens’ everyday routines—not just for academic support but also for emotional expression, identity rehearsal, and social interaction. Drawing on interviews with 20 U.S. teens aged 13--18, this paper examines how chatbots are used across academic, emotional, and social domains. While teens often frame these interactions as impersonal or functional, their accounts reveal nuanced forms of self-presentation, tone management, and impression shaping. We introduce the concept of ambient trust to describe how repeated interactions foster a sense of alignment with AI systems—even without deep emotional reliance. We contribute: (1) a thematic account of teens’ cross-domain chatbot practices; (2) a theoretical synthesis drawing on boundary regulation, image management, and related theories of algorithmic mediation; (3) the concept of ambient trust, which helps explain how instrumental use can quietly shape self-expression; and (4) design implications for developmentally appropriate, transparent AI systems.
The platformisation of news is increasingly shaping young adults’ emotional wellbeing, presenting urgent challenges for HCI. Existing approaches prioritise control, visibility, and agency, neglecting the emotional and relational dimensions of everyday news encounters. Such frameworks tend to overlook how information encounters contribute to emotional strain, affective overload, and the need for self-care. In this study, we adopted a qualitative, context-sensitive methodology to explore how young adults engage with news in their daily lives, foregrounding the emotional experiences that accompany these interactions. Our findings reveal that information encounters are deeply entangled with emotional needs such as self-expression, self-preservation, care for others, and a relational dependence on personalisation algorithms. Our insights call for a reorientation toward emotionally aware, harm-reducing design that supports emotional resilience, fosters empathetic engagement, and promotes self-care in information encounters. This work contributes to ongoing conversations in HCI around affective computing and the ethics of personalisation in socio-technical systems.
Social media platforms are deeply embedded in teenagers’ daily lives, shaping their identities, relationships, and leisure time while introducing risks such as social pressure, harmful content, and addiction. While attention capture mechanisms and dark patterns are increasingly recognized as contributors to the harm these platforms perpetuate, teenagers’ own experiences of harm remain underexplored. In this study, we report on an analysis of eight interviews with participants aged 12--17, revealing how their desire to be a ``normal teen'' shapes their lives, how they experience and interpret harms, and how ecologies of use influence mitigation strategies. Our findings reveal that teenagers frequently attribute responsibility to themselves or other teens rather than to the designed affordances of the platform. We contribute a detailed account of potential behavioral and attentional harms that further situates ``what counts as harm'' within contemporary technology governance debates, emphasizing the need for design alternatives that balance safety, agency, and meaningful engagement.