Virtual and Mixed Reality Interaction

Conference Name
CHI 2025
Estimating the Effects of Encumbrance and Walking on Mixed Reality Interaction
Abstract

This paper investigates the effects of two situational impairments---encumbrance (i.e., carrying a heavy object) and walking---on interaction performance in canonical mixed reality tasks. We built Bayesian regression models of movement time, pointing offset, error rate, and throughput for the target acquisition task, and of throughput, UER, and CER for the text entry task to estimate these effects. Our results indicate that a 1.0 kg encumbrance increases selection movement time by 28%, decreases text entry throughput by 17%, and increases UER by 50%, but does not affect pointing offset. Walking led to a 63% increase in ray-cast movement time and a 51% reduction in text entry throughput. It also increased selection pointing offset by 16%, ray-cast pointing offset by 17%, and error rate by 8.4%. The interaction effect of 1.0 kg encumbrance and walking resulted in a 112% increase in ray-cast movement time. Our findings enhance the understanding of the effects of encumbrance and walking on mixed reality interaction, and contribute towards accumulating knowledge of situational impairments research in mixed reality.
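
For readers unfamiliar with how such effect estimates are obtained, the sketch below shows a minimal Bayesian regression of movement time on encumbrance and walking (plus their interaction) in Python with PyMC. The variable names, priors, and simulated data are illustrative assumptions and do not reproduce the authors' actual model or data.

```python
# Minimal Bayesian regression sketch (illustrative only, not the authors' model).
# Models log movement time with indicators for encumbrance, walking, and their
# interaction, so coefficients translate into multiplicative (percentage) effects.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n = 200
encumbered = rng.integers(0, 2, n)   # 0 = unencumbered, 1 = 1.0 kg load (assumed coding)
walking = rng.integers(0, 2, n)      # 0 = standing, 1 = walking
# Simulated log movement times (seconds); real data would come from the study.
log_mt = (np.log(0.8) + 0.25 * encumbered + 0.45 * walking
          + 0.15 * encumbered * walking + rng.normal(0, 0.2, n))

with pm.Model() as model:
    intercept = pm.Normal("intercept", 0, 1)
    b_enc = pm.Normal("b_encumbrance", 0, 1)
    b_walk = pm.Normal("b_walking", 0, 1)
    b_int = pm.Normal("b_interaction", 0, 1)
    sigma = pm.HalfNormal("sigma", 1)
    mu = intercept + b_enc * encumbered + b_walk * walking + b_int * encumbered * walking
    pm.Normal("obs", mu=mu, sigma=sigma, observed=log_mt)
    trace = pm.sample(1000, tune=1000, chains=2, random_seed=0)

# On the log scale, exp(coefficient) - 1 is the estimated percentage change in
# movement time, the kind of effect size reported in the abstract above.
print(float(np.exp(trace.posterior["b_encumbrance"].mean()) - 1))
```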

Award
Honorable Mention
Authors
Tinghui Li
University of Sydney, Sydney, Australia
Eduardo Velloso
University of Sydney, Sydney, New South Wales, Australia
Anusha Withana
The University of Sydney, Sydney, NSW, Australia
Zhanna Sarsenbayeva
University of Sydney, Sydney, Australia
DOI

10.1145/3706598.3713492

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713492

A Multimodal Approach for Targeting Error Detection in Virtual Reality Using Implicit User Behavior
Abstract

Although the point-and-select interaction method has been shown to lead to user- and system-initiated errors, it is still prevalent in VR scenarios. Current solutions to facilitate selection interactions exist; however, they do not address the challenges caused by targeting inaccuracy. To reduce the effort required to target objects, we developed a model that quickly detected targeting errors after they occurred. The model used implicit multimodal user behavioral data to identify possible targeting outcomes. Using a dataset composed of 23 participants engaged in VR targeting tasks, we then trained a deep learning model to differentiate between correct and incorrect targeting events within 0.5 seconds of a selection, resulting in an AUC-ROC of 0.9. The utility of this model was then evaluated in a user study with 25 participants, which found that participants recovered from more errors, and recovered faster, when assisted by the model. These results advance our understanding of targeting errors in VR and facilitate the design of future intelligent error-aware systems.
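
As a rough illustration of the kind of classifier described here, the sketch below trains a small recurrent network in Python (PyTorch) on fixed-length windows of multimodal behavioural signals following a selection. The channel count, window length, architecture, and synthetic data are assumptions for illustration; the paper's actual features and model are not reproduced.

```python
# Illustrative sketch of a post-selection targeting-error classifier
# (not the paper's architecture). Each sample is a 0.5 s window of
# multimodal behaviour (e.g., gaze + controller channels) after a selection.
import torch
import torch.nn as nn

N_CHANNELS = 8   # assumed number of behavioural signals
WINDOW = 45      # assumed samples in 0.5 s at 90 Hz

class ErrorDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_CHANNELS, 64, batch_first=True)
        self.head = nn.Linear(64, 1)   # logit: P(selection was a targeting error)

    def forward(self, x):              # x: (batch, WINDOW, N_CHANNELS)
        _, h = self.encoder(x)
        return self.head(h[-1]).squeeze(-1)

# Synthetic stand-in data; the study instead used recordings from 23 participants.
x = torch.randn(256, WINDOW, N_CHANNELS)
y = torch.randint(0, 2, (256,)).float()

model = ErrorDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(5):                     # a few epochs, purely for illustration
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
# In practice, such a model would be evaluated with AUC-ROC on held-out users.
```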

Authors
Naveen Sendhilnathan
Meta, Seattle, Washington, United States
Ting Zhang
Meta Inc., Redmond, Washington, United States
David Bethge
Meta Inc., Redmond, Washington, United States
Michael Nebeling
Meta Inc., Redmond, Washington, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Tanya R. Jonker
Meta Inc., Redmond, Washington, United States
DOI

10.1145/3706598.3713777

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713777

On the Go with AR: Attention to Virtual and Physical Targets while Varying Augmentation Density
Abstract

Augmented reality is projected to be a primary mode of information consumption on the go, seamlessly integrating virtual content into the physical world. However, the potential perceptual demands of viewing virtual annotations while navigating a physical environment could impact user efficacy and safety, and the implications of these demands are not well understood. Here, we investigate the impact of virtual path guidance and augmentation density (visual clutter) on search performance and memory. Participants walked along a predefined path, searching for physical or virtual items. They experienced two levels of augmentation density, and either walked freely or with enforced speed and path guidance. Augmentation density impacted behavior and reduced awareness of uncommon objects in the environment. Analysis of search task performance and post-experiment item recall revealed differing attention to physical and virtual objects. On the basis of these findings, we outline considerations for AR apps designed for use on the go.

Authors
You-Jin Kim
Texas A&M University, College Station, Texas, United States
Radha Kumaran
University of California, Santa Barbara, Santa Barbara, California, United States
Jingjing Luo
The Pingry School, Basking Ridge, New Jersey, United States
Tom Bullock
UC Santa Barbara, Santa Barbara, California, United States
Barry Giesbrecht
University of California -- Santa Barbara, Santa Barbara, California, United States
Tobias Höllerer
University of California, Santa Barbara, Santa Barbara, California, United States
DOI

10.1145/3706598.3714289

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714289

HeadTurner: Enhancing Viewing Range and Comfort of using Virtual and Mixed-Reality Headsets while Lying Down via Assisted Shoulder and Head Actuation
Abstract

Virtual and mixed reality headsets, such as the Apple Vision Pro and Meta Quest, began supporting use in reclined postures in 2024, accommodating users who prefer or require this position. However, the surfaces on which users rest restrict shoulder and head rotation, reducing viewing range and comfort. A formative study (n=16) comparing usage while standing vs. lying down showed that head rotation range decreased from 261° to 130° horizontally and from 172° to 94.9° vertically. To improve viewing range and comfort, we present HeadTurner, a novel approach that assists user-initiated head rotations by actuating the resting surface to yield in the pitch and yaw axes. In a user study (n=16), HeadTurner significantly expanded the field of view and improved comfort compared to a fixed surface. Although VR sickness was slightly reduced with HeadTurner, the difference was not statistically significant. Overall, HeadTurner was preferred by 75% of participants. Although our proof-of-concept device was prototyped as a bed, the approach can be extended to more compact and affordable form factors, such as motorized reclining chairs, offering the potential for comfortable use of VR and MR headsets over extended periods; it also inspired users to suggest applications of interest in back-rested scenarios.

Authors
En-Huei Wu
National Taiwan University, Taipei, Taiwan
Po-Yun Cheng
National Taiwan University, New Taipei City, Taiwan
Che-Wei Hsu
National Taiwan University, Taipei City, Taiwan
Cheng Hsin Han
National Taiwan University, Taipei, Taiwan
Pei Chen Lee
HCI Lab, Taipei, Taiwan
Chia-An Fan
The University of Tokyo, Tokyo, Japan
Yu Chia Kuo
National Taiwan University, Taipei, Taiwan
Kai-Jing Hu
University Of Waterloo, Waterloo, Ontario, Canada
Yu Chen
National Taiwan University, Taipei, Taiwan
Mike Y.. Chen
National Taiwan University, Taipei, Taiwan
DOI

10.1145/3706598.3714214

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714214

Modes of Interaction with Navigation Apps
Abstract

Despite many HCI studies of diverse factors shaping users’ navigation experiences, how to design navigation systems to be adaptable to all of these factors remains a challenge. To address this challenge, we study general variations in users’ intended navigation experiences. Based on 30 interviews, we find that interactions with navigation apps can be subsumed under three “modes”: follow, modify, and background. For each mode of interaction, we highlight users’ key motivations, interactions with apps, and challenges. We propose these modes as higher-level concepts for exploring how to enable the details of navigation support to be adaptable to users’ generally intended navigation experiences. We discuss broader implications for issues of efficiency and overreliance in our experience of the physical environment through navigation apps.

Authors
Ju Yeon Jung
KAIST, Daejeon, Korea, Republic of
Tom Steinberger
KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3706598.3714180

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714180

Exploring Joint Effects of Locomotion Continuity and Wayfinding Assistance in Non-Embodied VR Game Navigation
Abstract

Navigation is a crucial part of the VR game experience. However, discrete and continuous optic flow from locomotion techniques (DL and CL) and wayfinding assistance in VR games may impact players' navigation differently. Limited research has explored how DL and CL's influence on navigation changes across different wayfinding assistance conditions. This study employed a 2×3 factorial between-subjects experiment with 78 participants to investigate their joint effects. The study explores explanations for observed differences among conditions through a mixed-method analysis of quantitative data (game performance, spatial learning performance, pressure level, usability, and sickness) and a thematic analysis of post-hoc interviews. From this analysis, the study identifies three key factors (exploration strategy, attention, and spatial knowledge) as explanations. Designers can leverage these insights to improve navigation support in VR games, with broader potential applications in fields such as healthcare and training.
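
For readers unfamiliar with analysing such designs, the sketch below fits a two-way ANOVA over a simulated 2×3 between-subjects layout (locomotion continuity × wayfinding assistance) using Python and statsmodels. The factor levels, column names, and scores are illustrative assumptions, not the study's data or analysis.

```python
# Illustrative two-way ANOVA for a 2x3 between-subjects design
# (locomotion continuity x wayfinding assistance); all data are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "locomotion": np.repeat(["discrete", "continuous"], 39),   # 78 participants total
    "wayfinding": np.tile(["none", "map", "arrow"], 26),       # assumed level names
})
df["score"] = rng.normal(70, 10, len(df))   # placeholder performance measure

# Fit the full factorial model and report main effects and the interaction.
model = ols("score ~ C(locomotion) * C(wayfinding)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```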

Authors
Ruowen Niu
Tsinghua University, Beijing, China
Ruishen Zheng
Tsinghua University, Beijing, China
Chen Liang
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, Guangdong, China
Minghui Liu
Tsinghua University, Beijing, China
DOI

10.1145/3706598.3713766

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713766

Juggling Extra Limbs: Identifying Control Strategies for Supernumerary Multi-Arms in Virtual Reality
Abstract

Using supernumerary multi-limbs for complex tasks is a growing research focus in Virtual Reality (VR) and robotics. Understanding how users integrate extra limbs with their own to achieve shared goals is crucial for developing efficient supernumeraries. This paper presents an exploratory user study (N=14) investigating strategies for controlling virtual supernumerary limbs with varying autonomy levels in VR object manipulation tasks. Using a Wizard-of-Oz approach to simulate semi-autonomous limbs, we collected both qualitative and quantitative data. Results show participants adapted control strategies based on task complexity and system autonomy, affecting task delegation, coordination, and body ownership. Based on these findings, we propose guidelines—commands, demonstration, delegation, and labeling instructions—to improve multi-limb interaction design by adapting autonomy to user needs and fostering better context-aware experiences.

Award
Honorable Mention
Authors
Hongyu Zhou
The University of Sydney, Sydney, NSW, Australia
Tom Kip
The University of Sydney, Sydney, NSW, Australia
Yihao Dong
The University of Sydney, Camperdown, NSW, Australia
Andrea Bianchi
KAIST, Daejeon, Korea, Republic of
Zhanna Sarsenbayeva
The University of Sydney, Sydney, NSW, Australia
Anusha Withana
The University of Sydney, Sydney, NSW, Australia
DOI

10.1145/3706598.3713647

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713647
