LeARning "en VRac"

Paper session

Conference
CHI 2020
ARMath: Augmenting Everyday Life with Math Learning
Abstract

We introduce ARMath, a mobile Augmented Reality (AR) system that allows children to discover mathematical concepts in familiar, ordinary objects and engage with math problems in meaningful contexts. Leveraging advanced computer vision, ARMath recognizes everyday objects, visualizes their mathematical attributes, and turns them into tangible or virtual manipulatives. Using the manipulatives, children can solve problems that situate math operations or concepts in specific everyday contexts. Informed by four participatory design sessions with teachers and children, we developed five ARMath modules to support basic arithmetic and 2D geometry. We also conducted an exploratory evaluation of ARMath with 27 children (ages 5-8) at a local children's museum. Our findings demonstrate how ARMath engages children in math learning, how failures in AI can be used as learning opportunities, and challenges that children face when using ARMath.

Keywords
Augmented Reality
Human-AI Interaction
Learning
Authors
Seokbin Kang
University of Maryland, College Park, MD, USA
Ekta Shokeen
University of Maryland, College Park, MD, USA
Virginia L. Byrne
University of Maryland, College Park, MD, USA
Leyla Norooz
University of Maryland, College Park, MD, USA
Elizabeth Bonsignore
University of Maryland, College Park, MD, USA
Caro Williams-Pierce
University of Maryland, College Park, MD, USA
Jon E. Froehlich
University of Washington, Seattle, WA, USA
DOI

10.1145/3313831.3376252

Paper URL

https://doi.org/10.1145/3313831.3376252

Video
Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG
Abstract

This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants used an immersive virtual reality (VR) application in which they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding than reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.

Keywords
Virtual reality
Educational Technology
Learning
Cognitive Load
EEG
Authors
Sarune Baceviciute
University of Copenhagen, Copenhagen, Denmark
Aske Mottelson
University of Copenhagen, Copenhagen, Denmark
Thomas Terkildsen
University of Copenhagen, Copenhagen, Denmark
Guido Makransky
University of Copenhagen, Copenhagen, Denmark
DOI

10.1145/3313831.3376872

Paper URL

https://doi.org/10.1145/3313831.3376872

Meta-AR-App: An Authoring Platform for Collaborative Augmented Reality in STEM Classrooms
Abstract

Augmented Reality (AR) has become a valuable tool for education and training processes. Meanwhile, cloud-based technologies can foster collaboration and other interaction modalities to enhance learning. We combine the cloud capabilities with AR technologies to present Meta-AR-App, an authoring platform for collaborative AR, which enables authoring between instructors and students. Additionally, we introduce a new application of an established collaboration process, the pull-based development model, to enable sharing and retrieving of AR learning content. We customize this model and create two modalities of interaction for the classroom: local (student to student) and global (instructor to class) pull. Based on observations from our user studies, we organize a four-category classroom model which implements our system: Work, Design, Collaboration, and Technology. Further, our system enables an iterative improvement workflow of the class content and enables synergistic collaboration that empowers students to be active agents in the learning process.

Keywords
augmented reality
authoring
classroom
collaboration
Git
pull-based model
version control
STEM
electrical circuitry
Authors
Ana Villanueva
Purdue University, West Lafayette, IN, USA
Zhengzhe Zhu
Purdue University, West Lafayette, IN, USA
Ziyi Liu
Purdue University, West Lafayette, IN, USA
Kylie Peppler
University of California, Irvine, Irvine, CA, USA
Thomas Redick
Purdue University, West Lafayette, IN, USA
Karthik Ramani
Purdue University, West Lafayette, IN, USA
DOI

10.1145/3313831.3376146

Paper URL

https://doi.org/10.1145/3313831.3376146

Video
Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles
Abstract

Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors that influence recall in haptic-integrated vocabulary interfaces: annotation mode, presentation sequence, and vibrotactile feedback. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in the 7-day delayed scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of immediate tests, but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications.

Keywords
Motoric engagement
Mobile vocabulary learning
Haptics for learning
Multimodal learning
Intersensory reinforced learning
Authors
Smitha Sheshadri
National University of Singapore, Singapore, Singapore
Shengdong Zhao
National University of Singapore, Singapore, Singapore
Yang Chen
National University of Singapore & Zhejiang University, Hangzhou, Zhejiang, China
Morten Fjeld
Chalmers University of Technology & University of Bergen, Gothenburg, Sweden
DOI

10.1145/3313831.3376272

Paper URL

https://doi.org/10.1145/3313831.3376272

Video
An Exploratory Study of Augmented Reality Presence for Tutoring Machine Tasks
Abstract

Machine tasks in workshops or factories are often a compound sequence of local, spatial, and body-coordinated human-machine interactions. Prior works have shown the merits of video-based and augmented reality (AR) tutoring systems for local tasks. However, due to the lack of a bodily representation of the tutor, they are not as effective for spatial and body-coordinated interactions. We propose avatars as an additional tutor representation to the existing AR instructions. In order to understand the design space of tutoring presence for machine tasks, we conduct a comparative study with 32 users. We aim to explore the strengths/limitations of the following four tutor options: video, non-avatar-AR, half-body+AR, and full-body+AR. The results show that users prefer the half-body+AR overall, especially for the spatial interactions. They have a preference for the full-body+AR for the body-coordinated interactions and the non-avatar-AR for the local interactions. We further discuss and summarize design recommendations and insights for future machine task tutoring systems.

Keywords
Machine Task
Avatar Tutor
Tutoring System Design
Exploratory Study
Augmented Reality
Authors
Yuanzhi Cao
Purdue University, West Lafayette, IN, USA
Xun Qian
Purdue University, West Lafayette, IN, USA
Tianyi Wang
Purdue University, West Lafayette, IN, USA
Rachel Lee
Purdue University, West Lafayette, IN, USA
Ke Huo
Purdue University, West Lafayette, IN, USA
Karthik Ramani
Purdue University, West Lafayette, IN, USA
DOI

10.1145/3313831.3376688

Paper URL

https://doi.org/10.1145/3313831.3376688

Video