We introduce ARMath, a mobile Augmented Reality (AR) system that allows children to discover mathematical concepts in familiar, ordinary objects and engage with math problems in meaningful contexts. Leveraging advanced computer vision, ARMath recognizes everyday objects, visualizes their mathematical attributes, and turns them into tangible or virtual manipulatives. Using the manipulatives, children can solve problems that situate math operations or concepts in specific everyday contexts. Informed by four participatory design sessions with teachers and children, we developed five ARMath modules to support basic arithmetic and 2D geometry. We also conducted an exploratory evaluation of ARMath with 27 children (ages 5-8) at a local children's museum. Our findings demonstrate how ARMath engages children in math learning, how failures in AI can be used as learning opportunities, and challenges that children face when using ARMath.
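A minimal Python sketch of the pipeline this abstract describes: recognize an everyday object, look up its mathematical attributes, and pose a situated arithmetic problem. The detector output, the attribute table, and every name below (DetectedObject, MATH_ATTRIBUTES, pose_problem) are illustrative assumptions, not ARMath's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical detection result; ARMath's real recognizer is not public,
# so this only illustrates the pipeline sketched in the abstract.
@dataclass
class DetectedObject:
    label: str   # e.g. "egg carton"
    count: int   # how many instances were found in the camera frame

# Assumed mapping from object classes to mathematical attributes.
MATH_ATTRIBUTES = {
    "egg carton": {"unit_value": 12, "shape": "rectangle"},
    "clock":      {"unit_value": 1,  "shape": "circle"},
}

def pose_problem(obj: DetectedObject) -> str:
    """Turn a recognized object into a situated arithmetic prompt."""
    attrs = MATH_ATTRIBUTES.get(obj.label)
    if attrs is None:
        # Fall back to a geometry prompt for unmapped objects.
        return f"What shapes can you find in the {obj.label}?"
    total = obj.count * attrs["unit_value"]
    return (f"You found {obj.count} {obj.label}(s), each holding "
            f"{attrs['unit_value']} items. How many items in total? ({total})")

print(pose_problem(DetectedObject("egg carton", 2)))
```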
This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants used an immersive virtual reality (VR) application in which they received identical instructional information rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding than reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.
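The theta/alpha comparison suggests a standard band-power analysis. Below is a hedged sketch of how such band power is conventionally computed from one EEG channel using a Welch power spectral density; the sampling rate, band edges, and function names are assumptions, since the paper's exact pipeline is not given here.

```python
import numpy as np
from scipy.signal import welch

FS = 256                                  # assumed sampling rate in Hz
THETA, ALPHA = (4.0, 8.0), (8.0, 13.0)    # conventional band edges in Hz

def band_power(signal: np.ndarray, band: tuple[float, float]) -> float:
    """Integrate the Welch PSD of one channel over a frequency band."""
    freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
    lo, hi = band
    mask = (freqs >= lo) & (freqs < hi)
    df = freqs[1] - freqs[0]              # uniform frequency resolution
    return float(psd[mask].sum() * df)    # rectangle-rule integration

# Synthetic stand-in for one participant's recording (60 s of noise).
rng = np.random.default_rng(0)
eeg = rng.standard_normal(FS * 60)
print("theta:", band_power(eeg, THETA))
print("alpha:", band_power(eeg, ALPHA))
```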
Augmented Reality (AR) has become a valuable tool for education and training processes. Meanwhile, cloud-based technologies can foster collaboration and other interaction modalities to enhance learning. We combine cloud capabilities with AR technologies to present Meta-AR-App, an authoring platform for collaborative AR that enables shared authoring between instructors and students. Additionally, we introduce a new application of an established collaboration process, the pull-based development model, to enable sharing and retrieving of AR learning content. We customize this model and create two modalities of interaction for the classroom: local (student-to-student) and global (instructor-to-class) pull. Based on observations from our user studies, we organize a four-category classroom model that implements our system: Work, Design, Collaboration, and Technology. Further, our system enables an iterative improvement workflow for class content and synergistic collaboration that empowers students to be active agents in the learning process.
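A minimal sketch of the pull-based model the abstract adapts: students fork the class content, edit locally, and the instructor merges accepted changes back into the class repository (the global pull). The class names ContentRepo and PullRequest and the dictionary-based store are assumptions for illustration, not Meta-AR-App's cloud implementation.

```python
# Minimal pull-based content sharing, under assumed names; the real
# system stores AR learning content in the cloud rather than a dict.
class ContentRepo:
    def __init__(self, content: dict[str, str]):
        self.content = dict(content)

    def fork(self) -> "ContentRepo":
        """Give a student a local, independently editable copy."""
        return ContentRepo(self.content)

class PullRequest:
    def __init__(self, source: "ContentRepo", changed_keys: list[str]):
        self.source, self.changed_keys = source, changed_keys

def merge(upstream: ContentRepo, pr: PullRequest) -> None:
    """Global pull: instructor accepts a student's changes into the class repo."""
    for key in pr.changed_keys:
        upstream.content[key] = pr.source.content[key]

class_repo = ContentRepo({"lesson1": "AR marker v1"})
student = class_repo.fork()                           # local student copy
student.content["lesson1"] = "AR marker v2"           # student edits locally
merge(class_repo, PullRequest(student, ["lesson1"]))  # instructor pulls
print(class_repo.content)                             # {'lesson1': 'AR marker v2'}
```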
Mobile vocabulary learning interfaces typically present material only through auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors that influence recall in haptic-integrated vocabulary interfaces: annotation mode, presentation sequence, and vibrotactile feedback. These factors were then evaluated in a within-subjects comparative study using a digital flashcard interface as the baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in 7-day delayed recall scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected immediate test results but not delayed test results. We discuss the implications of these factors for designing future mobile learning applications.
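To make the three design factors concrete, here is a hedged sketch of how they could parameterize a single flashcard trial. TrialConfig, run_trial, and the buzz stub are hypothetical names, not the authors' implementation; Python has no portable haptics API, so the vibration call is simulated with a print.

```python
from dataclasses import dataclass

@dataclass
class TrialConfig:
    annotation_mode: str   # e.g. "whole word" vs. "none" (assumed labels)
    sequence: str          # "audio-then-write" or "write-then-audio" (assumed)
    vibrotactile: bool     # vibrate on each annotation stroke?

def buzz() -> None:
    print("(vibrate)")     # stand-in for a device haptics call

def run_trial(word: str, cfg: TrialConfig) -> None:
    if cfg.sequence == "audio-then-write":
        print(f"play audio: {word}")
    if cfg.annotation_mode != "none":
        target = word if cfg.annotation_mode == "whole word" else word[0]
        for _letter in target:          # learner traces each letter
            if cfg.vibrotactile:
                buzz()                  # haptic feedback per stroke
    if cfg.sequence == "write-then-audio":
        print(f"play audio: {word}")

run_trial("gato", TrialConfig("whole word", "audio-then-write", True))
```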
Machine tasks in workshops or factories are often compound sequences of local, spatial, and body-coordinated human-machine interactions. Prior work has shown the merits of video-based and augmented reality (AR) tutoring systems for local tasks. However, lacking a bodily representation of the tutor, they are not as effective for spatial and body-coordinated interactions. We propose avatars as an additional tutor representation alongside existing AR instructions. To understand the design space of tutoring presence for machine tasks, we conduct a comparative study with 32 users, exploring the strengths and limitations of four tutor options: video, non-avatar-AR, half-body+AR, and full-body+AR. The results show that users prefer half-body+AR overall, especially for spatial interactions; they prefer full-body+AR for body-coordinated interactions and non-avatar-AR for local interactions. We further discuss and summarize design recommendations and insights for future machine task tutoring systems.
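The preference findings imply a simple mapping from interaction type to tutor representation, sketched below. The lookup values come directly from the results above; the function name and the fallback to the overall favorite are assumptions.

```python
# Preference findings from the study, encoded as a lookup an adaptive
# tutoring system might use to pick a tutor representation per step.
PREFERRED_TUTOR = {
    "local": "non-avatar-AR",
    "spatial": "half-body+AR",
    "body-coordinated": "full-body+AR",
}

def pick_tutor(interaction: str) -> str:
    # Fall back to the overall favorite, half-body+AR, for unknown types.
    return PREFERRED_TUTOR.get(interaction, "half-body+AR")

print(pick_tutor("spatial"))   # half-body+AR
```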