AI Tutors and Learning Support Systems

Conference Name
CHI 2026
Toward Scalable and Responsible Integration of Course-Specific AI Tutors: Instructor Experiences with a Campus-Wide Platform
Abstract

Despite rapid investment in generative AI across higher education, how instructors create, evaluate, and implement course-specific AI tutors remains empirically underexplored, highlighting critical tensions between institutional adoption and instructional practices. Drawing on interviews with 20 instructors, teaching assistants, and instructional designers at a large U.S. research university, we examine how participants engaged with a university-wide platform for creating course-specific AI tutors. Our findings reveal how instructors’ epistemic beliefs and pedagogical orientations shaped their perceptions of appropriate and inappropriate AI uses, as well as how instructional challenges motivated tutor creation across disciplines, class sizes, and course levels. We also identified three key patterns in instructor evaluation of course-specific AI tutors, along with the pedagogical, technical, and ethical implementation challenges they faced. We contribute timely insights to inform research, platform development, and institutional policy toward the responsible and scalable integration of course-specific AI tutors in higher education.

Authors
Eunhye Grace Ko
University of Texas at Austin, Austin, Texas, United States
Hakeoung Hannah Lee
The University of Virginia, Charlottesville, Virginia, United States
Anjali Singh
University of Michigan, Ann Arbor, Michigan, United States
Lily Boddy
University of Texas at Austin, Austin, Texas, United States
Kasey Ford
University of Texas at Austin, Austin, Texas, United States
Earl W. Huff
The University of Texas at Austin, Austin, Texas, United States
Will They Try Again? A Large-Scale RCT on Scaffolds that Support Persistence in an Intelligent Tutoring System
Abstract

Persistence after failure is critical for learning—but when students make mistakes in intelligent tutoring systems, they often choose not to try again. How can digital platforms encourage students to persist at these moments? We conducted a randomized controlled trial in an intelligent tutoring system for math and science, involving 164,532 students (Grades 8-12) who completed 17 million practice problems. We tested two scalable interventions: a brief persuasive prompt encouraging students to try again, and a visual default nudge that highlighted the retry option. Both interventions increased persistence after failure, and when combined, their effects were additive—suggesting they operate through distinct psychological mechanisms. The nudge had a much larger immediate effect, but the prompt showed proportionally greater spillover to untreated problems. These findings advance theories of persuasive design, demonstrating that implicit, interface-level nudges and explicit motivational prompts can be combined to avoid redundancy while amplifying impact.

Award
Honorable Mention
Authors
Michael W. Asher
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yumou Wei
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Adam Daniel Reynolds
Siyavula Foundation, Johannesburg, South Africa
Amy Ogan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paulo F. Carvalho
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Novobo: Supporting Teachers' Peer Learning of Instructional Gestures by Teaching a Mentee AI-Agent Together
Abstract

Instructional gestures are essential for teaching, enhancing communication and student comprehension. Current training methods for developing these skills can be time-consuming, isolating, or overly prescriptive, e.g., watching lengthy, one-size-fits-all videos. Conversely, research suggests that developing these tacit, experiential skills requires teachers’ peer learning, where they learn from each other and build shared knowledge. While much HCI exploration has applied learning-by-teaching to students’ peer learning, little has explored this approach for teacher professionalization. We present Novobo, an apprentice AI-agent stimulating teachers' peer learning of instructional gestures through verbal and bodily inputs. An evaluation with 30 teachers in 10 collaborative sessions showed Novobo prompted teachers to externalize and share tacit knowledge through dialogue and movement. Teaching an AI mentee together reduced their pressure, facilitating peer exchange and the co-construction of practical knowledge. This work contributes a novel design and empirical insights into how teachable AI-agents can facilitate peer learning in teacher professionalization.

Award
Best Paper
Authors
Jiaqi Jiang
Southern University of Science and Technology, Shenzhen, China
Kexin Huang
Southern University of Science and Technology, Shenzhen, China
Huan Zeng
Beijing Normal University, Zhuhai, China
Duo Gong
Southern University of Science and Technology, Shenzhen, China
Roberto Martinez-Maldonado
Monash University, Melbourne, Victoria, Australia
Pengcheng An
Southern University of Science and Technology, Shenzhen, China
From Answer Engines to Learning Partners: A Dual-ZPD Design Framework for AI-Supported Learning
Abstract

Generative AI's function as a frictionless "answer engine" creates a paradox in educational HCI: the very tools that can enhance intellect may also weaken it by allowing users to circumvent crucial cognitive processes. This risks creating a "hollowed mind"---knowledge that is broad but superficial, and a user experience that diminishes learner agency. The convenience of cognitive offloading introduces a motivational challenge that traditional cognitive scaffolding cannot address. We argue that designing genuine human-AI partnerships in learning requires moving beyond cognitive support to motivation-aware scaffolding. This paper provides a toolkit for building motivation-aware AI systems. At its core is the Dual Zone of Proximal Development (DZPD), a conceptual framework building on foundational work in educational psychology. We introduce an overarching design principle, concrete design principles, illustrative archetypes, and examples of measurable indicators. These conceptual tools offer essential guidance for the next wave of empirical HCI research in education.

Authors
Reinhard Klein
University of Bonn, Bonn, Germany
Daria Benden
University of Bonn, Bonn, Germany
Alexander Schier
University of Bonn, Bonn, Germany
David Stotko
University of Bonn, Bonn, Germany
Fani Lauermann
University of Bonn, Bonn, Germany
Designing Scaffolding Cards to Facilitate LLM-Based Socratic Instruction: An Exploratory Study of Response Strategies to Support Learning
Abstract

Overreliance on answers generated by large language models (LLMs) poses risks to the development of learners’ critical thinking. Socratic instruction, which follows a “tutor asks, student answers” approach, could mitigate overreliance by engaging learners with LLM-generated questions rather than having them passively seek answers from LLMs. However, learners without effective response strategies often produce superficial answers and thereby undermine Socratic instruction. To bridge the gap, we first conducted a formative study (N=20) to analyze learners’ dialogue logs and interviews, deriving 18 Scaffolding Cards as response strategies to guide learners in framing their answers. A subsequent mixed-methods study (N=34) demonstrated that Scaffolding Cards improved critical thinking, optimized cognitive load allocation, and increased learning satisfaction compared to a condition without scaffolds. Our work reconfigures scaffolding by incorporating state-aware, agency-preserving, and function-transparent support. We further provide actionable implications for designing responsive and personalized scaffolding to facilitate learner-LLM interaction, introducing innovative perspectives for reclaiming learner agency in LLM-driven education.

Award
Honorable Mention
Authors
Lujin Mao
The Hong Kong Polytechnic University, Hong Kong, China
Linyuan Dong
The Hong Kong Polytechnic University, Hong Kong, China
Wenan Li
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Xiangen Hu
University of Memphis, Memphis, Tennessee, United States
Kun-Pyo Lee
The Hong Kong Polytechnic University, Hong Kong, China
Zhibin Zhou
The Hong Kong Polytechnic University, Hong Kong, China
SimStep: Human-in-the-Loop Authoring of Interactive Educational Simulations Through Task-Level Abstractions
Abstract

Generative AI enables educators to create interactive learning content by describing goals in natural language. However, without programming affordances such as traceability, refinement, and debugging, teachers struggle to align simulations with learners’ needs, refine them step by step, or verify that they reflect intended learning concepts. We propose a task-level abstraction approach that structures authoring as a sequence of representations, mirroring how teachers plan lessons and providing checkpoints for specification, inspection, and refinement. We instantiate this approach in SimStep, an authoring environment that scaffolds simulation design with four abstractions: Concept Graph, Scenario Graph, Learning Goal Graph, and UI Graph. SimStep also introduces an inverse correction process to revise hidden model assumptions without requiring code manipulation. A technical evaluation shows that these abstractions preserve fidelity across transformations, while a user study with educators demonstrates their effectiveness in authoring simulations. Our work reframes AI-assisted programming as human–AI co-authoring through structured, domain-aligned abstractions.

Authors
Zoe Kaputa
University of Washington, Seattle, Washington, United States
Anika Rajaram
The Harker School, San Jose, California, United States
Vryan Feliciano
Stanford University, Stanford, California, United States
Zhuoyue Lyu
University of Cambridge, Cambridge, United Kingdom
Maneesh Agrawala
Stanford University, Stanford, California, United States
Hariharan Subramonyam
Stanford University, Stanford, California, United States
AmIWrite: Exploring Scalable One-on-One Handwriting-Based Tutoring for Mathematical Problem-Solving with an LLM-Powered AI Tutor
Abstract

Real-time handwriting interactions between tutors and students—where tutors observe individual problem-solving processes, provide personalized annotations, and adapt explanations based on students' work—are fundamental to effective STEM tutoring. However, scaling such personalized handwriting-based tutoring remains challenging: human tutors cannot be available to every student on demand, and current online platforms often fail to recreate equivalent learning experiences. As an initial step toward tackling this challenge, we present AmIWrite, an LLM-powered AI tutoring system for mathematical problem-solving that provides real-time co-speech handwriting interactions on tablet devices, instantiated here as a case study in linear algebra. We conducted a within-subjects study (N = 40) comparing AmIWrite to a text-based AI tutor on two linear algebra topics. Our case study demonstrates how a multimodal AI tutor can preserve the pedagogical benefits of handwriting-based math tutoring and offer a potential path toward more scalable one-on-one STEM tutoring.

Authors
Ziyi Liu
Purdue University, West Lafayette, Indiana, United States
Yuzhao Chen
Purdue University, West Lafayette, Indiana, United States
Haoyu Ji
Purdue University, West Lafayette, Indiana, United States
Runlin Duan
Purdue University, West Lafayette, Indiana, United States
Zhengzhe Zhu
Purdue University, West Lafayette, Indiana, United States
Xiyun Hu
Purdue University, West Lafayette, Indiana, United States
Kylie Peppler
University of California, Irvine, Irvine, California, United States
Karthik Ramani
Purdue University, West Lafayette, Indiana, United States