Interacting with VR

Conference Name
CHI 2022
In-Depth Mouse: Integrating Desktop Mouse into Virtual Reality
Abstract

Virtual Reality (VR) has potential for productive knowledge work; however, midair pointing with controllers or hand gestures does not offer the precision and comfort of a traditional 2D mouse. Directly integrating mice into VR is difficult because selecting targets in 3D space is negatively impacted by binocular rivalry, perspective mismatch, and improperly calibrated control-display (CD) gain. To address these issues, we developed Depth-Adaptive Cursor, a 2D-mouse-driven pointing technique for 3D selection with depth adaptation, which continuously interpolates the cursor depth by inferring what users intend to select based on the cursor position, the viewpoint, and the selectable objects. Depth-Adaptive Cursor also uses a novel CD gain tool to compute a usable range of CD gains for general mouse-based pointing in VR. A user study demonstrated that Depth-Adaptive Cursor significantly improved performance compared with an existing mouse-based pointing technique without depth adaptation in terms of time (21.2%), error (48.3%), perceived workload, and user satisfaction.
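As a rough, hypothetical sketch of the depth-adaptation idea (not the authors' implementation; the falloff and smoothing parameters below are assumptions), cursor depth could be inferred by weighting selectable objects by their angular proximity to the view ray through the 2D cursor:

```python
import numpy as np

def adaptive_cursor_depth(eye_pos, cursor_ray_dir, object_positions,
                          prev_depth, smoothing=0.2, falloff_deg=5.0):
    """Infer a cursor depth from the selectable objects near the cursor ray.

    eye_pos and object_positions are numpy arrays in world space, and
    cursor_ray_dir is a unit vector from the eye through the 2D cursor.
    Hypothetical sketch only: the paper's actual interpolation scheme and
    parameters are not reproduced here.
    """
    weights, depths = [], []
    for p in object_positions:
        to_obj = p - eye_pos
        depth = float(np.linalg.norm(to_obj))
        # Angle between the cursor ray and the direction to the object.
        cos_a = float(np.dot(cursor_ray_dir, to_obj)) / (depth + 1e-9)
        angle_deg = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        # Objects far from the cursor ray contribute little weight.
        weights.append(np.exp(-(angle_deg / falloff_deg) ** 2))
        depths.append(depth)
    if not weights or sum(weights) < 1e-6:
        return prev_depth  # nothing near the cursor: keep the current depth
    target_depth = float(np.dot(weights, depths)) / sum(weights)
    # Continuously interpolate toward the inferred depth to avoid jumps.
    return prev_depth + smoothing * (target_depth - prev_depth)
```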

Award
Honorable Mention
Authors
Qian Zhou
Autodesk Research, Toronto, Ontario, Canada
George Fitzmaurice
Autodesk Research, Toronto, Ontario, Canada
Fraser Anderson
Autodesk Research, Toronto, Ontario, Canada
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501884

Video
OVRlap: Perceiving Multiple Locations Simultaneously to Improve Interaction in VR
Abstract

We introduce OVRlap, a VR interaction technique that lets the user perceive multiple places simultaneously from a first-person perspective. OVRlap achieves this by overlapping viewpoints. At any time, only one viewpoint is active, meaning that the user may interact with objects therein. Objects seen from the active viewpoint are opaque, whereas objects seen from passive viewpoints are transparent. This allows users to perceive multiple locations at once and easily switch to the one in which they want to interact. We compare OVRlap and a single-viewpoint technique in a study where 20 participants complete object-collection and monitoring tasks. We find that participants are significantly faster and move their head significantly less with OVRlap in both tasks. We propose how the technique might be improved through automated switching of the active viewpoint and intelligent viewpoint rendering.
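As an illustrative sketch only (the paper's rendering and blending details are not specified here, and the passive_alpha value is an assumption), the overlapping-viewpoint idea can be expressed as composing per-frame draw commands in which the active viewpoint is opaque and interactive while passive viewpoints are transparent:

```python
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    position: tuple                               # world-space location of this viewpoint
    objects: list = field(default_factory=list)   # objects visible from it

def compose_frame(viewpoints, active_index, passive_alpha=0.35):
    """Return (object, alpha, interactive) draw commands for one frame.

    Only the active viewpoint is rendered opaque and accepts interaction;
    passive viewpoints are blended in transparently so the user can
    perceive several locations at once.
    """
    draw_list = []
    for i, vp in enumerate(viewpoints):
        is_active = (i == active_index)
        alpha = 1.0 if is_active else passive_alpha
        for obj in vp.objects:
            draw_list.append((obj, alpha, is_active))
    return draw_list
```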

Award
Honorable Mention
Authors
Jonas Schjerlund
University of Copenhagen, Copenhagen, Denmark
Kasper Hornbæk
University of Copenhagen, Copenhagen, Denmark
Joanna Bergström
University of Copenhagen, Copenhagen, Denmark
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501873

Video
ImpactVest: Rendering Spatio-Temporal Multilevel Impact Force Feedback on Body in VR
Abstract

Rendering instant and intense impact feedback on users' hands, limbs, and head to enhance realism in virtual reality (VR) has been proposed in previous works, but impact on the body is still less discussed. With the body's large surface area to utilize, numerous impact patterns can be rendered in versatile VR applications, e.g., being shot, blasted, punched, or slashed on the body in VR games. Herein, we propose ImpactVest to render spatio-temporal multilevel impact force feedback on the body. By independently controlling nine impactors in a 3×3 layout using elastic force, impact is generated at different levels, positions, and time sequences for versatile spatial and temporal combinations. We conducted a just-noticeable difference (JND) study to understand how well users distinguish impact levels on the body. A time interval threshold study was then performed to ascertain the time interval thresholds between two impact stimuli that separate simultaneous impact, a continuous impact stroke, and two discrete impact stimuli. Based on the results, we conducted a VR experience study to verify that impact feedback from ImpactVest enhances VR realism.
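A hypothetical sketch of driving such a spatio-temporal pattern is shown below; fire_impactor is an assumed driver callback and the timing and level values are illustrative, not the paper's actual hardware interface:

```python
import time

GRID_ROWS, GRID_COLS = 3, 3  # 3x3 impactor layout on the vest

def play_pattern(pattern, fire_impactor):
    """Play a spatio-temporal impact pattern on the vest.

    pattern is a list of (row, col, level, onset_seconds) tuples;
    fire_impactor(index, level) is a hypothetical driver callback.
    """
    start = time.monotonic()
    for row, col, level, onset in sorted(pattern, key=lambda e: e[3]):
        # Wait until this stimulus's onset time relative to pattern start.
        delay = onset - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        fire_impactor(row * GRID_COLS + col, level)

# Example: a left-to-right "slash" across the middle row at rising intensity.
# play_pattern([(1, 0, 1, 0.00), (1, 1, 2, 0.08), (1, 2, 3, 0.16)], fire_impactor)
```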

Authors
Hsin-Ruey Tsai
National Chengchi University, Taipei, Taiwan
Yu-So Liao
National Chengchi University, Taipei, Taiwan
Chieh Tsai
National Chengchi University, Taipei, Taiwan
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501971

Video
Kuiper Belt: Utilizing the "Out-of-natural Angle" Region in the Eye-gaze Interaction for Virtual Reality
Abstract

The maximum physical range of horizontal human eye movement is approximately 45°. However, in a natural gaze shift, the direction of the gaze relative to the frontal direction of the head rarely deviates by more than 25°. We name this 25°–45° region the "Kuiper Belt" of eye-gaze interaction. We utilize this region to address the Midas touch problem, enabling a visual search task while reducing false input in the virtual reality environment. In this work, we conduct two studies to establish design principles for placing menu items in the Kuiper Belt, an "out-of-natural-angle" region of eye-gaze movement, and to determine the effectiveness and workload of the Kuiper Belt-based method. The results indicate that the Kuiper Belt-based method facilitated the visual search task while reducing false input. Finally, we present example applications utilizing the findings of these studies.
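A minimal sketch of the region test implied by the abstract, assuming only the 25° and 45° boundaries and omitting the dwell and menu-placement logic of the actual system:

```python
import numpy as np

NATURAL_LIMIT_DEG = 25.0   # natural gaze shifts rarely exceed this eye-in-head offset
PHYSICAL_LIMIT_DEG = 45.0  # approximate horizontal limit of eye movement

def gaze_region(head_forward, gaze_dir):
    """Classify the eye-in-head angle as 'natural', 'kuiper_belt', or 'out_of_range'."""
    cos_a = np.dot(head_forward, gaze_dir) / (
        np.linalg.norm(head_forward) * np.linalg.norm(gaze_dir))
    angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
    if angle <= NATURAL_LIMIT_DEG:
        return "natural"          # normal viewing: ignore for menu input
    if angle <= PHYSICAL_LIMIT_DEG:
        return "kuiper_belt"      # deliberate gaze offset: eligible for selection
    return "out_of_range"
```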

Authors
Myungguen Choi
Hokkaido University, Sapporo, Japan
Daisuke Sakamoto
Hokkaido University, Sapporo, Japan
Tetsuo Ono
Hokkaido University, Sapporo, Japan
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517725

Video
Beyond Being Real: A Sensorimotor Control Perspective on Interactions in Virtual Reality
Abstract

We can create Virtual Reality (VR) interactions that have no equivalent in the real world by remapping spacetime or altering users' body representation, such as stretching the user's virtual arm to manipulate distant objects or scaling up the user's avatar to enable rapid locomotion. Prior research has leveraged such approaches, which we call beyond-real techniques, to make interactions in VR more practical, efficient, ergonomic, and accessible. We present a survey categorizing prior movement-based VR interaction literature as reality-based, illusory, or beyond-real interactions. We survey relevant conferences (CHI, IEEE VR, VRST, UIST, and DIS), focusing on selection, manipulation, locomotion, and navigation in VR. For beyond-real interactions, we describe the transformations that prior works have used to create novel remappings. We discuss open research questions through the lens of the human sensorimotor control system and highlight challenges that must be addressed for effective use of beyond-real interactions in future VR applications, including plausibility, control, long-term adaptation, and individual differences.
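As one concrete example of the beyond-real remappings mentioned above, a Go-Go-style nonlinear arm extension maps the real hand offset to a virtual hand offset; the threshold and gain values here are arbitrary illustrations, not taken from the survey:

```python
import numpy as np

def go_go_remap(real_offset, threshold=0.4, gain=6.0):
    """Map the real hand offset (from the torso) to a virtual hand offset.

    Within `threshold` metres the mapping is one-to-one; beyond it the
    virtual hand extends nonlinearly so distant objects become reachable.
    Illustrative parameter values only.
    """
    real_offset = np.asarray(real_offset, dtype=float)
    d = np.linalg.norm(real_offset)
    if d <= threshold:
        return real_offset                          # one-to-one within arm's reach
    virtual_distance = d + gain * (d - threshold) ** 2
    return real_offset * (virtual_distance / d)     # stretch along the same direction
```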

Authors
Parastoo Abtahi
Stanford University, Stanford, California, United States
Sidney Hough
Stanford University, Stanford, California, United States
James A. Landay
Stanford University, Stanford, California, United States
Sean Follmer
Stanford University, Stanford, California, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517706

Video