This study session has ended. Thank you for participating.
Scientists have long sought to engage public audiences in research through citizen science such as biological surveys or distributed data collection. Recent online platforms have expanded the scope of what people-powered research can mean. Science museums are unique cultural institutions that translate scientific discovery for public audiences, while conducting research of their own. This makes museums compelling sites for engaging audiences directly in scientific research, but there are associated challenges as well. This project engages public audiences in contributing to real research as part of their visit to a museum. We present the design and evaluation of U!Scientist, an interactive multi-person tabletop exhibit based on the online Zooniverse project, Galaxy Zoo. We installed U!Scientist in a planetarium and collected video, computer logs, naturalistic observations, and surveys with visitors. Our findings demonstrate the potential of exhibits to engage new audiences in collaborative scientific discussions as part of people-powered research.
Second language (L2) English learners often find it difficult to improve their pronunciation due to the lack of expressive and personalized corrective feedback. In this paper, we present Pronunciation Teacher (PTeacher), a Computer-Aided Pronunciation Training (CAPT) system that provides personalized, exaggerated audio-visual corrective feedback for mispronunciations. Although the effectiveness of exaggerated feedback has been demonstrated, it remains unclear how to choose appropriate degrees of exaggeration when interacting with individual learners. To fill this gap, we interview 100 L2 English learners and 22 professional native teachers to understand their needs and experiences. Three critical metrics are proposed for both learners and teachers to identify the best exaggeration levels in both the audio and visual modalities. Additionally, we incorporate a personalized dynamic feedback mechanism based on each learner's English proficiency. Building on these insights, we design a comprehensive interactive pronunciation training course that helps L2 learners rectify mispronunciations in a more perceptible, understandable, and discriminative manner. Extensive user studies demonstrate that our system significantly improves learners' learning efficiency.
Previous research has demonstrated the benefits of applying comparative strategies while learning from informational texts, where students identify key concepts and then attempt to establish relationships between those concepts. Concept mapping is one activity that can prompt students to use comparative strategies, but not all students benefit from this activity without support. This work presents an intelligent tutoring system for concept mapping that facilitates the development of comparative strategies through diagnostic feedback that responds to the quality of students' concept mapping process and map correctness. The novelty of the system lies in the combination of outcome-based feedback methods typical in concept mapping with adaptive process-based evaluation. In a lab study with 46 college students, we evaluate the effect of this combined adaptive support compared to solely process-based support and no support. Results suggest that the combined feedback approach shows promise for improving students' use of comparative strategies and learning outcomes.
Applications of generative models such as Generative Adversarial Networks (GANs) have made their way to social media platforms that children frequently interact with. While GANs carry ethical implications that affect children, such as the generation of deepfakes, there have been negligible efforts to educate middle school children about generative AI. In this work, we present a generative models learning trajectory (LT), educational materials, and interactive activities for young learners, focusing on GANs, the creation and application of machine-generated media, and their ethical implications. The activities were deployed in four online workshops with 72 students (grades 5-9). We found that these materials enabled children to gain an understanding of what generative models are, their technical components and potential applications, and their benefits and harms, while reflecting on their ethical implications. Drawing on these findings, we propose an improved learning trajectory for complex socio-technical systems.
Shadowing, i.e., listening to recorded native speech and simultaneously vocalizing the words, is a popular language-learning technique that is known to improve listening skills. However, despite strong evidence for its efficacy as a listening exercise, existing shadowing systems do not adequately support listening-focused practice, especially in self-regulated learning environments with no external feedback. To bridge this gap, we introduce CAST, a shadowing system that makes self-regulation easy and effective through four novel design elements -- in-the-moment highlights for tracking and visualizing progress, contextual blurring for inducing self-reflection on misheard words, self-listening comparators for post-practice self-evaluation, and adjustable pause-handles for self-paced practice. We base CAST on a formative user study (N=15) that provides fresh empirical grounding on the needs and challenges of shadowers. We validate our design through a summative evaluation (N=12) that shows learners can successfully self-regulate their shadowing practice with CAST while retaining focus on listening.
Students in computerized learning environments often direct their own learning processes, which requires metacognitive awareness of what should be learned next. We investigated a novel method of measuring verbalized metacognition by applying natural language processing (NLP) to transcripts of interviews conducted in a classroom with 99 middle school students who were using a computerized learning environment. We iteratively adapted the NLP method for the linguistic characteristics of these interviews, then applied it to study three research questions regarding the relationships between verbalized metacognition and measures of 1) learning, 2) confusion, and 3) metacognitive problem-solving strategies. Verbalized metacognition was not directly related to learning, but was related to confusion and metacognitive problem-solving strategies. Results also suggested that interviews themselves may improve learning by encouraging metacognition. We discuss implications for designing computerized environments that support self-regulated learning through metacognition.
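As a concrete illustration of what such a pipeline might look like, the sketch below classifies transcript utterances as containing verbalized metacognition or not. It is only a minimal, assumption-laden stand-in: the abstract does not specify the authors' NLP method, and the model choice (TF-IDF features with logistic regression), the labels, and the example utterances are all invented for illustration.

```python
# Minimal sketch (not the authors' method): classify interview utterances as
# containing verbalized metacognition or not, using TF-IDF features and
# logistic regression. Labels and example utterances are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled transcript snippets: 1 = verbalized metacognition, 0 = other.
utterances = [
    "I don't really get how fractions work yet, so I should review that",
    "I think I need to check my answer before moving on",
    "The hint told me the answer was four",
    "I clicked the next button",
    "I'm not sure I understood the last problem, maybe I went too fast",
    "My friend sits next to me in class",
]
labels = [1, 1, 0, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(utterances, labels)

# Score a new utterance for the presence of metacognitive talk.
new_utterance = ["I should figure out which skills I still don't know"]
print(clf.predict(new_utterance), clf.predict_proba(new_utterance))
```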
Deployment of AI assessment tools in education is widespread, but work on students' interactions with and attitudes towards imperfect autograders is comparatively lacking. This paper presents students' perceptions of a ~90% accurate automated short-answer grader that determined homework and exam credit in a college-level computer science course. Using surveys and interviews, we investigated students' knowledge about the autograder and their attitudes towards it.
We observed that misalignment between folk theories about how the autograder worked and how it actually worked could lead to suboptimal answer-construction strategies. Students overestimated the autograder's probability of marking correct answers as wrong, and estimates of this probability were associated with dissatisfaction and perceptions of unfairness. Many participants expressed a need for additional instruction on how to cater to the autograder. From these findings, we propose guidelines for incorporating imperfect short-answer autograders into classrooms in a manner that is considerate of students' needs.
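To make the stakes concrete, here is a back-of-the-envelope sketch of how many correct answers even a roughly 90% accurate grader might reject over a term. All of the numbers (questions per student, fraction of correct answers, per-answer error rate) are assumptions chosen for illustration, not figures from the study.

```python
# Back-of-the-envelope sketch with assumed numbers (not from the paper):
# even a grader that marks correct answers wrong only 10% of the time will
# reject a noticeable number of correct answers over a semester.
n_questions = 200       # assumed short-answer questions per student per term
p_correct = 0.75        # assumed fraction of answers that are actually correct
p_marked_wrong = 0.10   # assumed chance a correct answer is marked wrong

expected_wrong_rejections = n_questions * p_correct * p_marked_wrong
print(f"Expected correct answers marked wrong: {expected_wrong_rejections:.0f}")
```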
Online exams have become widely used to evaluate students' mastery of knowledge in recent years, especially during the COVID-19 pandemic. However, proctoring online exams is challenging due to the lack of face-to-face interaction. Also, prior research has shown that online exams are more vulnerable to various cheating behaviors, which can damage their credibility. This paper presents a novel visual analytics approach that facilitates the proctoring of online exams by analyzing each student's exam video recordings and mouse movement data. Specifically, we detect and visualize suspicious head and mouse movements at three levels of detail, providing course instructors and teachers with convenient, efficient, and reliable proctoring for online exams. Our extensive evaluations, including usage scenarios, a carefully designed user study, and expert interviews, demonstrate the effectiveness and usability of our approach.
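For a sense of what the mouse-movement side of such an analysis could involve, the sketch below flags exam time windows where a student's cursor activity deviates sharply from their own baseline. The paper's actual detection and visualization pipeline is not described in the abstract; the sampling rate, window size, and z-score rule here are assumptions chosen only to illustrate the idea.

```python
# Minimal sketch (not the paper's pipeline): flag exam time windows where a
# student's mouse is unusually still or unusually fast relative to their own
# baseline, as candidate moments for closer review by the proctor.
import numpy as np

def flag_suspect_windows(xy, window=60, z_thresh=2.0):
    """xy: array of shape (T, 2) with mouse coordinates sampled once per second."""
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)           # per-second speed
    n_windows = len(speed) // window
    per_window = speed[: n_windows * window].reshape(n_windows, window).mean(axis=1)
    z = (per_window - per_window.mean()) / (per_window.std() + 1e-9)
    return np.where(np.abs(z) > z_thresh)[0]                      # indices of suspect windows

# Synthetic example: mostly ordinary movement, with one long idle stretch.
rng = np.random.default_rng(0)
xy = np.cumsum(rng.normal(0, 5, size=(3600, 2)), axis=0)          # one hour of samples
xy[1200:1800] = xy[1200]                                          # idle for ten minutes
print(flag_suspect_windows(xy))
```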
Techniques from natural language processing (NLP) offer opportunities to design new dialog-based forms of human-computer interaction and to analyze the argumentation quality of texts. This can be leveraged to provide students with adaptive tutoring during persuasive writing exercises. To test whether individual tutoring on argumentation helps students write more convincing texts, we developed ArgueTutor, a conversational agent that tutors students with adaptive argumentation feedback throughout their learning journey. In a study with 55 students, we compared ArgueTutor to a traditional writing tool. Students using ArgueTutor wrote more convincing texts with a higher quality of argumentation than those using the alternative approach. The measured levels of enjoyment and ease of use are promising for adopting our tool in traditional learning settings. Our results indicate that dialog-based learning applications combined with NLP text feedback can help foster students' writing skills.
Adaptive Collaborative Learning Support (ACLS) systems improve collaboration and learning for students compared to individual work or collaboration with non-adaptive support. However, many ACLS systems are ill-suited for rural contexts, where students often need multiple kinds of support to complete tasks, may speak languages unsupported by the system, and require more than pre-assigned tutor-tutee pairs for more equitable learning. We designed an intervention that fosters more equitable help-seeking by automatically detecting when students struggle and prompting them to seek help from specific peers who can help. We conducted a mixed-methods experimental study with 98 K-3 students in a rural village in Tanzania over a one-month period, evaluating how the system affects student interactions, system engagement, and student learning. Our intervention increased student interactions almost fourfold compared to the control condition, increased domain-knowledge interactions, and propelled students to engage in more cognitively challenging activities.
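As a rough sketch of the kind of pairing logic such an intervention implies (detect a struggling student, then point them to a peer who has already mastered the skill), the code below uses a hypothetical consecutive-error rule and a simple mastery record. None of these thresholds or data structures come from the deployed system; they only illustrate the detect-then-recommend pattern described in the abstract.

```python
# Minimal sketch (hypothetical rules, not the deployed system): flag a student
# who gets several consecutive attempts wrong on a skill, then suggest a peer
# who has already answered that skill correctly.
from collections import defaultdict

STRUGGLE_STREAK = 3  # assumed threshold of consecutive wrong attempts

class HelpMatcher:
    def __init__(self):
        self.wrong_streak = defaultdict(int)   # (student, skill) -> consecutive errors
        self.mastery = defaultdict(set)        # skill -> students who answered correctly

    def record_attempt(self, student, skill, correct):
        if correct:
            # For simplicity, one correct attempt counts as mastery here.
            self.wrong_streak[(student, skill)] = 0
            self.mastery[skill].add(student)
            return None
        self.wrong_streak[(student, skill)] += 1
        if self.wrong_streak[(student, skill)] >= STRUGGLE_STREAK:
            helpers = sorted(self.mastery[skill] - {student})
            return helpers[0] if helpers else None  # peer to prompt, if any
        return None

matcher = HelpMatcher()
matcher.record_attempt("Asha", "addition", correct=True)
prompt = None
for _ in range(3):
    prompt = matcher.record_attempt("Juma", "addition", correct=False)
print(prompt)  # -> "Asha"
```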
Previous studies have highlighted the benefits of pedagogical conversational agents using socially-oriented conversation with students. In this work, we examine the effects of a conversational agent's use of affiliative and self-defeating humour --- considered conducive to social well-being and enhancing interpersonal relationships --- on learners' perception of the agent and attitudes towards the task. Using a between-subjects protocol, 58 participants taught a conversational agent about rock classification using a learning-by-teaching platform, the Curiosity Notebook. While all agents were curious and enthusiastic, the style of humour was manipulated such that the agent either expressed an affiliative style, a self-defeating style, or no humour. Results demonstrate that affiliative humour can significantly increase motivation and effort, while self-defeating humour, although enhancing effort, negatively impacts enjoyment. Findings further highlight the importance of understanding learner characteristics when using humour.