Augmented Reality (AR) systems are on their way to industrial application; for example, projection-based AR is used to enhance assembly work. Previous studies have shown advantages of such systems in permanent-use scenarios, such as faster assembly times. In this paper, we investigate whether such systems are also suitable for training purposes. In an experiment, we observed training with a projection-based AR system over multiple sessions and compared it with personal training and training with a paper manual. Our study shows that projection-based AR systems offer only small benefits in the training scenario. While immediate feedback prevents systematic mislearning of content, our results show that AR training does not match personal training in terms of speed and recall precision after 24 hours. Furthermore, we show that once an assembly task is properly trained, there are no differences in long-term recall precision, regardless of the training method.
Augmented Reality (AR) provides a unique opportunity to situate learning content in one's environment. In this work, we investigated how AR could be designed to provide an interactive, context-based language learning experience. Specifically, we developed a novel handheld-AR app for learning case grammar by dynamically creating quizzes based on real-life objects in the learner's surroundings. We compared this to the experience of learning with a non-contextual app that presented the same quizzes with static photographic images. Participants found AR suitable for use in their everyday lives and enjoyed the interactive experience of exploring grammatical relationships in their surroundings. Nonetheless, Bayesian tests provide substantial evidence that the interactive, context-embedded AR app did not improve case grammar skills, vocabulary retention, or usability over the experience with equivalent static images. Based on this, we propose how language learning apps could be designed to combine the benefits of contextual AR and traditional approaches.
Virtual reality (VR) platforms provide their users with immersive virtual environments but disconnect them from real-world events. The increasing length of VR sessions can therefore be expected to heighten users' need for information about external events such as message arrivals. Yet how and when to present these real-world notifications to users engaged in VR activities remains underexplored. We conducted an experiment to investigate individuals' receptivity during four VR activities (Loading, 360 Video, Treasure Hunt, Rhythm Game) to message notifications delivered via three types of displays (head-mounted, controller, and movable panel). While higher engagement generally led to stronger perceptions that notifications were ill-timed and/or disruptive, the suitability of notification displays for VR activities was influenced by the time-sensitivity of the VR content, overlapping use of modalities for delivering alerts, the display locations, and any requirement that the display be moved for notifications to be seen. Specific design suggestions are also provided.
Supporting natural communication cues is critical for people working together, whether remotely or face-to-face. In this paper we present a Mixed Reality (MR) remote collaboration system that enables a local worker to share a live 3D panorama of their surroundings with a remote expert. The remote expert can also share task instructions back to the local worker using visual cues in addition to verbal communication. We conducted a user study to investigate how sharing augmented gaze and gesture cues from the remote expert to the local worker affects overall collaboration performance and user experience. We found that by combining gaze and gesture cues, our remote collaboration system provided a significantly stronger sense of co-presence for both the local and remote users than the gaze cue alone. The combined cues were also rated significantly higher than gaze alone in terms of ease of conveying spatial actions.
Mobile Augmented Reality (AR) technology is enabling new applications in domains including architecture, education, and medicine. Because AR interfaces project digital data, information, and models into the real world, they enable new forms of collaborative work. However, despite the wide availability of AR applications, very little is known about how AR interfaces mediate and shape collaborative practices. This paper presents a study that examines how a mobile AR (M-AR) interface for inspecting and discovering AR models of varying complexity impacts co-located group practices. We contribute new insights into how current mobile AR interfaces affect co-located collaboration. Our results show that M-AR interfaces induce high mental load and frustration, cause a high number of context switches between devices and group discussion, and overall lead to a reduction in group interaction. We present design recommendations for future work on collaborative AR interfaces.