As large language models (LLMs) become widespread, students increasingly turn to systems like ChatGPT for writing tasks. Educators worry that this reliance may reduce critical engagement with writing and hinder students’ learning processes. Although datasets exist on students’ use of LLMs for writing, how students functionally use ChatGPT in detail---and how this usage shapes their writing and perceptions---remains underexplored. We conducted an online study (n=77) in which students wrote an essay using an in-house ChatGPT interface we developed to capture their queries. Through qualitative analysis, we identified the types of assistance students sought and characterized their patterns of use, ranging from asking for opinions on a topic to delegating the entire writing task to ChatGPT. We also found that students’ writing self-efficacy influenced their querying patterns and that levels of ownership and creativity varied depending on how they used ChatGPT. This study contributes empirical data to ongoing discussions about how writing education should incorporate or regulate LLM-powered tools.
AI-based chatbots have the potential to accelerate learning and teaching, but may also have counterproductive consequences without thoughtful design and scaffolding. To better understand teachers’ perspectives on large language model (LLM) based chatbots, we conducted a study with 11 teams of middle-school teachers using chatbots for a science and computational thinking activity within a block-based programming environment. Based on a qualitative analysis of audio transcripts and chatbot interactions, we propose three teacher profiles (explorer, frustrated, and mixed) that reflect diverse scaffolding needs. In their discussions, we found that teachers perceived chatbot benefits such as building prompting skills and self-confidence alongside risks including potential declines in learning and critical thinking. Key design recommendations include scaffolding the introduction to chatbots, facilitating teacher control of chatbot features, and suggesting when and how chatbots should be used. Our contribution informs the design of chatbots to support teachers and learners in middle school coding activities.
Early counting forms a critical foundation for numeracy, involving coordination of visual representations, verbal number words, and physical actions such as pointing. Designing effective technologies for young children therefore requires careful calibration of multimodal features. This study investigated how different levels of demonstration paired with a voice assistant—static (baseline: image+voice), animated (animation+voice), and interactive (touch+animation+voice)—influence counting-related understanding and engagement in 2–4-year-olds. We developed a tablet-based counting game and conducted a within-subjects study with 32 children. Results showed that the animated demonstration improved cardinal number word understanding over both the baseline and the interactive demonstration. Analyses of verbal counting engagement showed that concurrent touch demands increased cognitive load, limiting children’s counting aloud. These findings suggest that more interactivity does not always yield better outcomes for young learners. We contribute empirical evidence and design guidance: voice+animation supports early counting, while touch interactivity should be lightweight and age-appropriate, informing the design of multimodal voice-assisted applications.
This paper investigates how large language models (LLMs) are reshaping competitive programming. The field functions as an intellectual contest within computer science education and is marked by rapid iteration, real-time feedback, transparent solutions, and strict integrity norms. Prior work has evaluated LLM performance on contest problems, but little is known about how human stakeholders—contestants, problem setters, coaches, and platform stewards—are adapting their workflows and contest norms under LLM-induced shifts. At the same time, rising AI-assisted misuse and inconsistent governance expose urgent gaps in sustaining fairness and credibility. Drawing on 37 interviews spanning all four roles and a global survey of 207 contestants, as well as an API-based crawl of Codeforces contest logs (2022–2025) for quantitative analysis, we contribute: (i) an empirical account of evolving workflows, (ii) an analysis of contested fairness norms, and (iii) a chess-inspired governance approach with actionable measures—real-time LLM checks in online contests, peer co-monitoring and reporting, and cross-validation against offline performance—to curb LLM-assisted misuse while preserving fairness, transparency, and credibility.
Educational programming games (EPGs) build computational thinking (CT), a vital 21st-century skill. A core design challenge is how pedagogical and gameplay challenges are integrated to balance educational objectives with player engagement. This study formalizes two contrasting challenge design patterns that reflect distinct integration strategies: extrinsic programming challenges (C1), a pedagogy-oriented design where programming is the core challenge enforced through external constraints; and intrinsic programming challenges (C2), a gameplay-oriented design where programming serves as a tool for overcoming gameplay challenges raised by in-game puzzles. To examine these challenge design patterns, we developed two isomorphic EPGs, WannaBone1 (C1) and WannaBone2 (C2), each featuring 20 levels that introduce sequences, loops, conditionals, and global variables. A controlled classroom study with 306 primary school students reveals that both designs improve CT, whereas C2 yields significantly higher intrinsic learning motivation and higher-order flow immersion. These findings indicate that a gameplay-oriented rather than pedagogy-oriented design perspective better unites education and entertainment, guiding future EPG design.
Teaching assistants (TAs) play a critical role in computing and HCI education, yet little is known about how they perceive and use AI tools or imagine their future pedagogical uses. We report on a series of design workshops with 131 computing (CS) TAs across two U.S. universities. These workshops invited TAs to reflect on current AI use and envision future AI-enhanced tools and practices. Drawing on surveys and design artifacts, we (1) develop a cross-institutional typology of situated TA uses of AI, revealing opportunities and tensions; (2) show how TAs’ visions of AI are shaped by disciplinary norms, institutional structures, and their intermediary position as student-instructors; and (3) reveal ethical dilemmas. Our findings contribute to HCI by positioning TAs as AI-supported knowledge workers in the education domain; illustrating how design and speculation are shaped by people’s situated understandings of AI and their institutional contexts; and identifying a core tension in which TAs simultaneously preserve and erode the human dimensions of their work, with implications for future instructional tools and human–AI collaboration.
Motor-skill learning systems in XR rely on persistent cues. However, constant cueing can induce overreliance and erode memorization and skill transfer. We introduce a skill-adaptive, dynamically transparent ghost instructor whose opacity adapts in real time to learner performance. From a first-person perspective, users observe a ghost hand executing piano fingering with either static or performance-adaptive transparency in VR piano training. We conducted a within-subjects study (N=30) in which learners practiced with a traditional Static (fixed-transparency) mode and our proposed Dynamic (performance-adaptive) mode and were tested without guidance both immediately and after a 10-minute retention interval. Relative to Static, the Dynamic mode yielded higher pitch and fingering accuracy and smaller increases in errors. These findings suggest that adaptive transparency helps learners internalize fingerings, reducing dependency on external cues and improving short-term skill retention in immersive learning. We discuss design implications for motor-skill learning and outline extensions of this approach to long-term retention and complex tasks.