With the growing prevalence of online mis/disinformation encountered by children, digital media literacy has become an urgent concern. Existing research emphasizes cognitive models, focusing on individual reasoning and specific quantitative criteria to classify people’s information literacy. However, critics argue that focusing solely on the cognitive approach neglects the social, emotional, and cultural contexts that shape how mis/disinformation is created and spread. In this study, we expand beyond the cognitive model by examining socioemotional learning (SEL) and sociocultural (SC) perspectives. To explore how children conceptualize mis/disinformation through these lenses, we conducted 26 co-design workshops with children ages 6–11 over a 2.5-year period. Our findings highlight children’s awareness of emotional responses, peer pressure, financial incentives, and the importance of community support. These insights contribute to HCI by foregrounding the need for design approaches that integrate cognitive, SEL, and SC dimensions. We present an integrated framework to inform how community groups can support children, along with design recommendations that address the growing sophistication of mis/disinformation.
As people age, sleep often becomes lighter, more fragmented, and a source of increasing concern. Smart rings, like Oura, offer a discreet and comfortable means of supporting sleep tracking, yet it remains unclear how older adults engage with the sleep-related insights they provide. Our research investigates how older adults make sense of wearable-derived physiological and behavioural sleep data, the barriers they encounter in understanding health metrics, and the ways these technologies influence self-perception and wellbeing practices. We report findings from a one-month diary study (n=20) and follow-up interviews (n=10) after around four months of ring use. Participants reflected on the meanings they attributed to app-based metrics and whether such feedback felt useful, confusing, or intrusive, revealing misalignments with youthful defaults that negatively impacted engagement. We explore this in terms of "age friction" and discuss opportunities for more age-inclusive wearable technologies that promote meaningful engagement with personal health and wellbeing data.
Generative AI is increasingly present in children’s learning environments, yet little is known about how families navigate this technology in middle childhood (ages 7–13), when parental guidance remains strong but children seek independence. Drawing on self-directed learning (SDL), we explore how parents in our exploratory sample perceived children’s emerging self-directedness and agency. Through focus groups with 13 parent–child pairs, we examine parents’ views on children’s AI literacy development, readiness factors, and mediation strategies. Parents described emergent pathways shaped by screen time, self-directedness, and knowledge growth. They often confined AI to learning-only contexts, positioning it as a tutor while overlooking non-learning uses and risks such as privacy and infrastructural embedding. Many acknowledged limited AI literacy and turned to joint engagement as an opportunity for co-learning. Our findings surface possible parental pathways of children’s AI literacy, highlight gaps between pragmatic expectations and critical literacies, and offer situated design considerations for AI systems that scaffold SDL while balancing oversight with autonomy.
Understanding how to design and implement equity-based approaches to technology-rich learning in community settings can lead to increased and diversified participation in computing. However, research has shown that making practices can be inequitable, particularly for populations situated in low-resourced settings. In US cities, recreation centers have been shown to be promising sites for equity-based hands-on maker learning. However, it is unclear what approaches are needed to create the necessary technical and social infrastructure at these sites to support community uptake. Our study investigates the infrastructure development process for an equity-based makerspace program developed within city-run community recreation centers in two US cities over 3 years. We developed an infrastructure map depicting the ecosystem of multiple organizations involved in creating the program and identified how digital technologies in makerspaces function as sociocultural factors within this ecosystem.
Children increasingly interact with generative AI systems that can produce hallucinated content, potentially reinforcing misconceptions and undermining critical thinking skills. We investigate how children detect and respond to hallucinations while building and testing LLM-powered chatbots in a development environment. We integrated hallucination-awareness scaffolds into the environment, such as confidence indicators, fact-checking, repeated questioning, and model comparison. In a study with 48 middle school learners (ages 10–14), participants showed significant pre-to-post gains in AI knowledge, hallucination awareness, and confidence in building trustworthy chatbots. They developed multi-layered strategies, including probing inconsistencies and cross-checking with external sources. Key challenges included over-reliance on visible cues, fragmented use of scaffolds, and a tension between creativity and reliability. These findings highlight design implications for supporting children’s AI literacy in responsible AI development: supporting proactive, iterative engagement in the development cycle, integrating scaffolds into coherent workflows, and balancing creativity with accuracy.
Developing the ability to think critically about AI and interpret its outputs requires an understanding of AI bias, a key skill for both AI users and future developers. While some initiatives have introduced teens to algorithmic bias, few have engaged them in actively identifying and quantifying bias in real-world generative AI systems. This paper presents BiasViz, an interactive tool that leverages project-based and narrative-centered learning to help middle school students (ages 11–14) analyze AI bias in large language models. We conducted a study of 28 students’ interactions with BiasViz to evaluate its efficacy in fostering critical thinking about AI bias. Our findings suggest that BiasViz successfully introduced most students to AI bias, and some used the tool to explore personally relevant biases. We identify opportunities to iterate on the tool and its associated curriculum to promote learning, and share insights for designing learning environments that foster youth’s critical thinking about AI.
As generative AI (genAI) rapidly enters classrooms, accompanied by district-level policy rollouts and industry-led teacher trainings, it is important to rethink the canonical “adopt and train” playbook. Decades of educational technology research show that tools promising personalization and access often deepen inequities due to uneven resources, training, and institutional support. Against this backdrop, we conducted semi-structured interviews with 22 teachers from a large U.S. school district that was an early adopter of genAI. Our findings reveal the motivations driving adoption, the factors underlying resistance, and the boundaries teachers negotiate to align genAI use with their values. We further contribute by unpacking the sociotechnical dynamics (including district policies, professional norms, and relational commitments) that shape how teachers navigate the promises and risks of these tools.