AI Sensing and Intervention in Higher Education: Student Perceptions of Learning Impacts, Affective Responses, and Ethical Priorities
Description

AI technologies that sense student attention and emotions to enable more personalised teaching interventions are increasingly promoted, but they raise pressing questions about student learning, wellbeing, and ethics. In particular, students’ perspectives on AI sensing and intervention in learning are often overlooked. We conducted an online mixed-methods experiment with Australian university students (N=132), presenting video scenarios that varied by whether sensing was used (in-use vs. not-in-use), sensing modality (gaze-based attention detection vs. facial-based emotion detection), and intervention source (digital device vs. teacher). Participants also completed pairwise ranking tasks to prioritise six core ethical concerns. Findings revealed that students valued targeted intervention but responded negatively to AI monitoring, regardless of sensing method. Students preferred system-generated hints over teacher-initiated assistance, citing concerns about learning agency and social embarrassment. Their ethical considerations prioritised autonomy and privacy, followed by transparency, accuracy, fairness, and learning beneficence. We advocate designing customisable, socially sensitive, non-intrusive systems that preserve student control, agency, and well-being.

Supporting Holistic AI Ethics Literacy Education Through Critical Reflection: Three Recommendations for Fostering Children’s Ethical Growth
Description

With childhood increasingly mediated by AI and marked by children's heightened vulnerabilities, critical reflection emerges as a vital tool for both understanding and strengthening children's ethical reasoning, steering them between uncritical adoption and blanket pessimism about AI. As such, we present outcomes of a study with 66 children (aged 10-11) wherein we trace what children attend to and how they perceive AI ethics. Aligned with UNESCO's ethical principles for AI, we utilised 10 design fiction scenarios set in familiar contexts to prompt reflection. Mixed-methods data showed that children's perceptions skewed towards caution; ethical concerns were also distributed unevenly across principles, indicating where AI ethics literacy may need targeted scaffolding. This work contributes to HCI by highlighting the complexity of children's perceptions and showing how speculative, reflection-based methods can shift children's ethical considerations about AI, with three recommendations for AI ethics literacy education that the HCI community should consider in future.

Supporting Learners' Use of Imperfect Generative Pedagogical Chatbots: The Role of Chatbot Response Uncertainty and Reduced Verbosity
Description

Generative chatbots promise to scale personalized learning. Most publicly available generative chatbots are designed to provide confident and eloquent responses by default, even when hallucinating. Prior work has observed that learners using such chatbots often engage shallowly and fail to detect chatbot errors due to overtrust, cognitive overload, and prioritization of short-term gains. To address these challenges, this work examines two chatbot design options in a STEM learning context: introducing verbal uncertainty and reducing response verbosity. Using Bayesian causal inference and thematic analysis in a quasi-experimental setting, we found that a less verbose chatbot improved detection of errors with logical fallacies, but did not increase the use of alternative resources. A chatbot that always expressed uncertainty reduced the adoption of incorrect chatbot responses, but had mixed effects on learning outcomes, suggesting the need to increase signal credibility and maintain learners’ engagement in the learning process despite chatbot disuse.

Exploring the Design and Impact of Interactive Worked Examples for Learners with Varying Prior Knowledge
Description

Tutoring systems improve learning through tailored interventions, such as worked examples, but often suffer from the aptitude-treatment interaction effect where low prior knowledge learners benefit more. We applied the ICAP learning theory to design two new types of worked examples, Buggy (students fix bugs), and Guided (students complete missing rules), requiring varying levels of cognitive engagement, and investigated their impact on learning in a controlled experiment with 155 undergraduate students in a logic problem solving tutor. Students in the Buggy and Guided examples groups performed significantly better on the posttest than those receiving passive worked examples. Buggy problems helped high prior knowledge learners whereas Guided problems helped low prior knowledge learners. Behavior analysis showed that Buggy produced more exploration-revision cycles, while Guided led to more help-seeking and fewer errors. This research contributes to the design of interventions in logic problem solving for varied levels of learner knowledge and a novel application of behavior analysis to compare learner interactions with the tutor.

AI meets Mathematics Education: Supporting Instructors in Large Mathematics Classes with Context-Aware AI
Description

Large-enrollment university courses face persistent challenges in providing timely, scalable instructional support. While generative AI holds promise, its effective use depends on reliability and pedagogical alignment. We present a human-centered case study of AI-assisted support in a Calculus I course, implemented in close collaboration with the course instructor. We developed a system to answer students’ questions on a discussion forum, fine-tuning a lightweight language model on 2,588 historical student–instructor interactions. The model achieved 75.3% accuracy on a benchmark of 150 representative questions annotated by five instructors, and in 36% of cases its responses were rated equal to or better than instructor answers. A post-deployment student survey (N = 105) indicated that students valued the responses’ alignment with course materials and their immediate availability, while still relying on instructor verification for trust. We highlight the importance of hybrid human–AI workflows for safe and effective course support.

"Bespoke Bots": Diverse Instructor Needs for Customizing Generative AI Classroom Chatbots
Description

Instructors are increasingly experimenting with AI chatbots for classroom support. To investigate how instructors adapt chatbots to their own contexts, we first analyzed existing resources that provide prompts for educational purposes. We identified ten common categories of customization, such as persona, guardrails, and personalization. We then conducted interviews with ten university STEM instructors and asked them to card-sort the categories into priorities. We found that instructors consistently prioritized the ability to customize chatbot behavior to align with course materials and pedagogical strategies and de-prioritized customizing persona/tone. However, their prioritization of other categories varied significantly by course size, discipline, and teaching style, even across courses taught by the same individual, highlighting that no single design can meet all contexts. These findings suggest that modular AI chatbots may provide a promising path forward. We offer design implications for educational developers building the next generation of customizable classroom AI systems.

Instructional Mechanisms for Professional Writing: A Comparison of Scaffolded Annotation and ChatGPT
Description

Professional writing skills are essential for crafting job application materials where applicants showcase their qualifications to recruiters and employers. Lettersmith is a digital tool that supports writing through scaffolded annotation, an instructional approach combining an expert-informed checklist, annotated examples, and self-tagging. We evaluated the efficacy of the instructional mechanisms that make up scaffolded annotation, as well as the use of ChatGPT, in facilitating writing cognitive processes and writing quality. Through a lab experiment with 146 first-year college students writing and revising a cover letter, we found that the combined mechanisms of scaffolded annotation within Lettersmith promoted a stronger understanding of the writing genre. Specifically, the use of a checklist combined with another writing support, like an example or self-tagging, was particularly effective for improving writing quality. Unstructured use of ChatGPT did not improve writing cognitive processes or writing quality more than Lettersmith.
