Collaboration & learning in new realities

Paper session

Conference
CHI 2020
Augmented Reality Training for Industrial Assembly Work - Are Projection-based AR Assistive Systems an Appropriate Tool for Assembly Training?
Abstract

Augmented Reality (AR) systems are on their way to industrial application; for example, projection-based AR is used to enhance assembly work. Previous studies showed advantages of these systems in permanent-use scenarios, such as faster assembly times. In this paper, we investigate whether such systems are suitable for training purposes. In an experiment, we observed training with a projection-based AR system over multiple sessions and compared it with personal training and training with a paper manual. Our study shows that projection-based AR systems offer only small benefits in the training scenario. While systematic mislearning of content is prevented through immediate feedback, our results show that AR training does not match personal training in terms of speed and recall precision after 24 hours. Furthermore, we show that once an assembly task is properly trained, there are no differences in long-term recall precision, regardless of the training method.

Award
Honorable Mention
Keywords
Industrial Augmented Reality
Projection-based Augmented Reality
Assembly
Training
Assistive System
Empirical Study
Experiment
Authors
Sebastian Büttner
Clausthal University of Technology & OWL University of Applied Sciences and Arts, Clausthal-Zellerfeld & Lemgo, Germany
Michael Prilla
Clausthal University of Technology, Clausthal-Zellerfeld, Germany
Carsten Röcker
OWL University of Applied Sciences and Arts & Fraunhofer IOSB-INA, Lemgo, Germany
DOI

10.1145/3313831.3376720

Paper URL

https://doi.org/10.1145/3313831.3376720

Augmented Reality to Enable Users in Learning Case Grammar from Their Real-World Interactions
Abstract

Augmented Reality (AR) provides a unique opportunity to situate learning content in one's environment. In this work, we investigated how AR could be developed to provide an interactive, context-based language learning experience. Specifically, we developed a novel handheld-AR app for learning case grammar that dynamically creates quizzes based on real-life objects in the learner's surroundings. We compared this to the experience of learning with a non-contextual app that presented the same quizzes with static photographic images. Participants found AR suitable for use in their everyday lives and enjoyed the interactive experience of exploring grammatical relationships in their surroundings. Nonetheless, Bayesian tests provide substantial evidence that the interactive and context-embedded AR app did not improve case grammar skills, vocabulary retention, or usability over the experience with equivalent static images. Based on this, we propose how language learning apps could be designed to combine the benefits of contextual AR and traditional approaches.

Award
Honorable Mention
Keywords
Augmented Reality
Language Learning
Grammar
Contextual Learning
Self-Directed Learning
Authors
Fiona Draxler
Ludwig Maximilian University of Munich, München, Germany
Audrey Labrie
Polytechnique Montréal, Montreal, Canada
Albrecht Schmidt
Ludwig Maximilian University of Munich, Munich, Germany
Lewis L. Chuang
Ludwig Maximilian University of Munich, Munich, Germany
DOI

10.1145/3313831.3376537

Paper URL

https://doi.org/10.1145/3313831.3376537

Bridging the Virtual and Real Worlds: A Preliminary Study of Messaging Notifications in Virtual Reality
Abstract

Virtual reality (VR) platforms provide their users with immersive virtual environments, but disconnect them from real-world events. The increasing length of VR sessions can therefore be expected to boost users' need for information about external occurrences such as message arrival. Yet how and when to present these real-world notifications to users engaged in VR activities remains underexplored. We conducted an experiment to investigate individuals' receptivity during four VR activities (Loading, 360 Video, Treasure Hunt, Rhythm Game) to message notifications delivered via three types of displays (head-mounted, controller, and movable panel). While higher engagement generally led to higher perceptions that notifications were ill-timed and/or disruptive, the suitability of notification displays to VR activities was influenced by the time-sensitivity of the VR content, overlapping use of modalities for delivering alerts, the display locations, and any requirement that the display be moved for notifications to be seen. Specific design suggestions are also provided.

Keywords
Virtual reality
notification systems
interruptibility
receptivity
eye-tracking
Authors
Ching-Yu Hsieh
National Chiao Tung University, Hsinchu, Taiwan ROC
Yi-Shyuan Chiang
National Tsing Hua University, Hsinchu, Taiwan ROC
Hung-Yu Chiu
National Chiao Tung University, Hsinchu, Taiwan ROC
Yung-Ju Chang
National Chiao Tung University, Hsinchu, Taiwan ROC
DOI

10.1145/3313831.3376228

Paper URL

https://doi.org/10.1145/3313831.3376228

A User Study on Mixed Reality Remote Collaboration with Eye Gaze and Hand Gesture Sharing
Abstract

Supporting natural communication cues is critical for people to work together remotely and face-to-face. In this paper we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of his/her surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert with the local worker could affect the overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system could provide a significantly stronger sense of co-presence for both the local and remote users than using the gaze cue alone. The combined cues were also rated significantly higher than gaze alone in terms of ease of conveying spatial actions.

Keywords
Mixed Reality
Augmented Reality
Virtual Reality
remote collaboration
3D panorama
scene reconstruction
eye gaze
hand gesture
Authors
Huidong Bai
University of Auckland, Auckland, New Zealand
Prasanth Sasikumar
University of Auckland, Auckland, New Zealand
Jing Yang
ETH Zürich, Zürich, Switzerland
Mark Billinghurst
University of Auckland, Auckland, New Zealand
DOI

10.1145/3313831.3376550

Paper URL

https://doi.org/10.1145/3313831.3376550

CollabAR – Investigating the Mediating Role of Mobile AR Interfaces on Co-Located Group Collaboration
Abstract

Mobile Augmented Reality (AR) technology is enabling new applications in domains including architecture, education, and medical work. Because AR interfaces project digital data, information, and models into the real world, they allow for new forms of collaborative work. However, despite the wide availability of AR applications, very little is known about how AR interfaces mediate and shape collaborative practices. This paper presents a study that examines how a mobile AR (M-AR) interface for inspecting and discovering AR models of varying complexity impacts co-located group practices. We contribute new insights into how current mobile AR interfaces impact co-located collaboration. Our results show that M-AR interfaces induce high mental load and frustration, cause a high number of context switches between devices and group discussion, and overall lead to a reduction in group interaction. We present design recommendations for future work focusing on collaborative AR interfaces.

Award
Honorable Mention
Keywords
Mobile augmented reality
co-located collaboration
Authors
Thomas Wells
Lancaster University, Lancaster, Lancashire, United Kingdom
Steven Houben
Lancaster University, Lancaster, Lancashire, United Kingdom
DOI

10.1145/3313831.3376541

Paper URL

https://doi.org/10.1145/3313831.3376541

Video