Children increasingly interact with generative AI systems that can produce hallucinated content, potentially reinforcing misconceptions and undermining critical thinking skills. We investigate how children detect and respond to hallucinations while building and testing LLM-powered chatbots in a development environment. Into this environment we integrated hallucination-awareness scaffolds, including confidence indicators, fact-checking, repeated questioning, and model comparison. In a study with 48 middle school learners (aged 10–14), participants showed significant pre-to-post gains in AI knowledge, hallucination awareness, and confidence in building trustworthy chatbots. They developed multi-layered strategies, including probing inconsistencies and cross-checking with external sources. Key challenges included over-reliance on visible cues, fragmented use of scaffolds, and a tension between creativity and reliability. These findings yield design implications for children's AI literacy in responsible AI development: supporting proactive, iterative engagement across the development cycle, integrating scaffolds into coherent workflows, and balancing creativity with accuracy.
ACM CHI Conference on Human Factors in Computing Systems