Interaction and Perception in Immersive Environments

Conference name
CHI 2024
MAF: Exploring Mobile Acoustic Field for Hand-to-Face Gesture Interactions
Abstract

We present MAF, a novel acoustic sensing approach that leverages the commodity hardware in bone conduction earphones for hand-to-face gesture interactions. In brief, when bone conduction earphones emit audio signals, we observe that these signals not only propagate along the surface of the human face but also dissipate into the air, creating an acoustic field that envelops the wearer's head. We conduct benchmark studies to understand how various hand-to-face gestures and human factors influence this acoustic field. Building on the insights gained from these initial studies, we then propose a deep neural network combined with signal preprocessing techniques. This combination enables MAF to effectively detect, segment, and subsequently recognize a variety of hand-to-face gestures, whether performed in contact with the face or above it. Our comprehensive evaluation with 22 participants demonstrates that MAF achieves an average gesture recognition accuracy of 92% across ten different gestures tailored to users' preferences.

Authors
Yongjie Yang
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Tao Chen
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Yujing Huang
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Xiuzhen Guo
Zhejiang University, Hangzhou, China
Longfei Shangguan
University of Pittsburgh, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642437

Video
PhoneInVR: An Evaluation of Spatial Anchoring and Interaction Techniques for Smartphone Usage in Virtual Reality
Abstract

When users wear a virtual reality (VR) headset, they lose access to their smartphone and its accompanying apps. Past work has proposed smartphones as enhanced VR controllers, but little work has explored using existing smartphone apps and performing traditional smartphone interactions while in VR. In this paper, we consider three potential spatial anchorings for rendering smartphones in VR: on top of a tracked physical smartphone that the user holds (Phone-locked), on top of the user's empty hand, as if holding a virtual smartphone (Hand-locked), or in a static position in front of the user (World-locked). We conducted a comparative study of target acquisition, swiping, and scrolling tasks across these anchorings using direct Touch or above-the-surface Pinch. Our findings indicate that physically holding a smartphone with Touch improves accuracy and speed for all tasks, and that Pinch performs better with virtual smartphones. These findings provide a valuable foundation for enabling smartphone usage in VR.

Authors
Fengyuan Zhu
University of Toronto, Toronto, Ontario, Canada
Mauricio Sousa
University of Toronto, Toronto, Ontario, Canada
Ludwig Sidenmark
University of Toronto, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Paper URL

doi.org/10.1145/3613904.3642582

Video
Exploring Visualizations for Precisely Guiding Bare Hand Gestures in Virtual Reality
Abstract

Bare-hand interaction in augmented or virtual reality (AR/VR) systems, while intuitive, often results in errors and frustration. Existing methods, such as a static icon or a dynamic tutorial, can only convey simple, coarse hand gestures and lack corrective feedback. This paper explores various visualizations for enhancing precise hand interaction in VR. Through a comprehensive two-part formative study with 11 participants, we identified four types of essential information for visual guidance and designed different visualizations that manifest these information types. We further distilled four visual designs and conducted a controlled lab study with 15 participants to assess their effectiveness for various single- and double-handed gestures. Our results demonstrate that visual guidance significantly improved users' gesture performance, reducing time and workload while increasing confidence. Moreover, we found that the visualizations did not disrupt most users' immersive VR experience or their perceptions of hand-tracking and gesture-recognition reliability.

Authors
Xizi Wang
University of Waterloo, Waterloo, Ontario, Canada
Ben Lafreniere
Meta, Toronto, Ontario, Canada
Jian Zhao
University of Waterloo, Waterloo, Ontario, Canada
Paper URL

doi.org/10.1145/3613904.3642935

Video
Assessing the Influence of Visual Cues in Virtual Reality on the Spatial Perception of Physical Thermal Stimuli
Abstract

Advancements in haptics for Virtual Reality (VR) have increased the quality of immersive content. In particular, recent efforts to provide realistic temperature sensations have gained traction, but they most often require very specialized or large, complex devices to create precise thermal actuations. However, since users are largely detached from the real world, such a precise correspondence between the physical location of thermal stimuli and the visuals shown in VR might not be necessary for an authentic experience. In this work, we contribute the findings of a controlled experiment with 20 participants, investigating the spatial localization accuracy of thermal stimuli under matching and non-matching visual cues of a virtual heat source in VR. Although participants were highly confident in their localization decisions, their ability to accurately pinpoint thermal stimuli was notably deficient.

Authors
Sebastian Günther
Technical University of Darmstadt, Darmstadt, Germany
Alexandra Skogseide
Technical University of Darmstadt, Darmstadt, Germany
Robin Buhlmann
Technical University of Darmstadt, Darmstadt, Germany
Max Mühlhäuser
TU Darmstadt, Darmstadt, Germany
Paper URL

doi.org/10.1145/3613904.3642154

Video
Improving Electromyographic Muscle Response Times through Visual and Tactile Prior Stimulation in Virtual Reality
Abstract

Electromyography (EMG) enables hands-free interactions by detecting muscle activity at different locations on the human body. Previous studies have demonstrated that input performance based on isometric contractions is muscle-dependent and can benefit from synchronous biofeedback. However, it remains unknown whether stimulation before interaction can help users localize and tense a muscle faster. In a response-based VR experiment (N=21), we investigated whether prior stimulation using visual or tactile cues at four different target muscles (biceps, triceps, upper leg, calf) can reduce the time needed to perform isometric muscle contractions. The results show that prior stimulation decreases EMG reaction times with visual, vibrotactile, and electrotactile cues. Our experiment also revealed important findings regarding learning and fatigue at the different body locations. We provide qualitative insights into the participants' perceptions and discuss potential reasons for the improved interaction. We contribute implications and use cases for prior-stimulated muscle activation.

Authors
Jessica Sehrt
Frankfurt University of Applied Sciences, Frankfurt, Germany
Leonardo Leite Ferreira
Frankfurt University of Applied Sciences, Frankfurt, Germany
Karsten Weyers
Frankfurt University of Applied Sciences, Frankfurt, Germany
Amir Mahmood
Frankfurt University of Applied Sciences, Frankfurt, Germany
Thomas Kosch
HU Berlin, Berlin, Germany
Valentin Schwind
Frankfurt University of Applied Sciences, Frankfurt, Germany
Paper URL

doi.org/10.1145/3613904.3642091

Video