This paper investigates the effects of two situational impairments---encumbrance (i.e., carrying a heavy object) and walking---on interaction performance in canonical mixed reality tasks. To estimate these effects, we built Bayesian regression models of movement time, pointing offset, error rate, and throughput for a target acquisition task, and of throughput, uncorrected error rate (UER), and corrected error rate (CER) for a text entry task. Our results indicate that 1.0 kg of encumbrance increases selection movement time by 28%, decreases text entry throughput by 17%, and increases UER by 50%, but does not affect pointing offset. Walking led to a 63% increase in ray-cast movement time and a 51% reduction in text entry throughput. It also increased selection pointing offset by 16%, ray-cast pointing offset by 17%, and error rate by 8.4%. The interaction of 1.0 kg encumbrance and walking resulted in a 112% increase in ray-cast movement time. Our findings enhance the understanding of the effects of encumbrance and walking on mixed reality interaction, and contribute to the accumulating body of knowledge on situational impairments in mixed reality.
https://dl.acm.org/doi/10.1145/3706598.3713492
Although the point-and-select interaction method has been shown to lead to user- and system-initiated errors, it is still prevalent in VR scenarios. Current solutions to facilitate selection interactions exist; however, they do not address the challenges caused by targeting inaccuracy. To reduce the effort required to target objects, we developed a model that quickly detects targeting errors after they occur. The model uses implicit multimodal user behavioral data to identify possible targeting outcomes. Using a dataset composed of 23 participants engaged in VR targeting tasks, we trained a deep learning model to differentiate between correct and incorrect targeting events within 0.5 seconds of a selection, resulting in an AUC-ROC of 0.9. The utility of this model was then evaluated in a user study with 25 participants, which showed that participants recovered from more errors, and recovered faster, when assisted by the model. These results advance our understanding of targeting errors in VR and facilitate the design of future intelligent error-aware systems.
https://dl.acm.org/doi/10.1145/3706598.3713777
Augmented reality is projected to be a primary mode of information consumption on the go, seamlessly integrating virtual content into the physical world. However, viewing virtual annotations while navigating a physical environment imposes perceptual demands that could impact user efficacy and safety, and the implications of these demands are not well understood. Here, we investigate the impact of virtual path guidance and the visual clutter of augmentation density on search performance and memory. Participants walked along a predefined path, searching for physical or virtual items. They experienced two levels of augmentation density, and either walked freely or with enforced speed and path guidance. Augmentation density impacted behavior and reduced awareness of uncommon objects in the environment. Analysis of search task performance and post-experiment item recall revealed differing attention to physical and virtual objects. On the basis of these findings, we outline considerations for AR apps designed for use on the go.
https://dl.acm.org/doi/10.1145/3706598.3714289
Virtual and mixed reality headsets, such as the Apple Vision Pro and Meta Quest, began supporting use in reclined postures in 2024, accommodating users who prefer or require this position. However, the surfaces on which users rest restrict shoulder and head rotation, reducing viewing range and comfort. A formative study (n=16) comparing usage while standing vs. lying down showed that head rotation range decreased from 261° to 130° horizontally and from 172° to 94.9° vertically. To improve viewing range and comfort, we present HeadTurner, a novel approach that assists user-initiated head rotations by actuating the resting surface to yield in the pitch and yaw axes. In a user study (n=16), HeadTurner significantly expanded the field of view and improved comfort compared to a fixed surface. Although VR sickness was slightly reduced with HeadTurner, the difference was not statistically significant. Overall, HeadTurner was preferred by 75% of participants. Although our proof-of-concept device was prototyped as a bed, the approach can be extended to more compact and affordable form factors, such as motorized reclining chairs, offering the potential for comfortable use of VR and MR headsets over extended periods; it also inspired participants to propose applications of interest in back-rested scenarios.
https://dl.acm.org/doi/10.1145/3706598.3714214
Despite many HCI studies of diverse factors shaping users’ navigation experiences, how to design navigation systems that are adaptable to all of these factors remains a challenge. To address this challenge, we study general variations in users’ intended navigation experiences. Based on 30 interviews, we find that interactions with navigation apps can be subsumed under three “modes”: follow, modify, and background. For each mode of interaction, we highlight users’ key motivations, interactions with apps, and challenges. We propose these modes as higher-level concepts for exploring how to make the details of navigation support adaptable to users’ generally intended navigation experiences. We discuss broader implications for issues of efficiency and overreliance in how we experience physical environments through navigation apps.
Navigation is a crucial part of the VR game experience. However, discrete and continuous optic flow from locomotion techniques (DL and CL) and wayfinding assistance in VR games may impact players' navigation differently. Limited research has explored how the influence of DL and CL on navigation changes across different wayfinding assistance conditions. This study employed a 2×3 factorial between-subjects experiment with 78 participants to investigate their joint effects. It draws explanations for observed differences among conditions from a mixed-methods analysis of quantitative data (game performance, spatial learning performance, pressure level, usability, and sickness) and a thematic analysis of post-hoc interviews. From this analysis, the study identifies three key factors as explanations: exploration strategy, attention, and spatial knowledge. Designers can leverage these insights to improve navigation support in VR games, with broader potential applications in fields such as healthcare and training.
https://dl.acm.org/doi/10.1145/3706598.3713766
Using supernumerary multi-limbs for complex tasks is a growing research focus in Virtual Reality (VR) and robotics. Understanding how users integrate extra limbs with their own to achieve shared goals is crucial for developing efficient supernumeraries. This paper presents an exploratory user study (N=14) investigating strategies for controlling virtual supernumerary limbs with varying autonomy levels in VR object manipulation tasks. Using a Wizard-of-Oz approach to simulate semi-autonomous limbs, we collected both qualitative and quantitative data. Results show participants adapted control strategies based on task complexity and system autonomy, affecting task delegation, coordination, and body ownership. Based on these findings, we propose guidelines—commands, demonstration, delegation, and labeling instructions—to improve multi-limb interaction design by adapting autonomy to user needs and fostering better context-aware experiences.
https://dl.acm.org/doi/10.1145/3706598.3713647