Gaze Interaction in Immersive Environments

Conference Name
CHI 2024
Towards an Eye-Brain-Computer Interface: Combining Gaze with the Stimulus-Preceding Negativity for Target Selections in XR
Abstract

Gaze-assisted interaction techniques enable intuitive selections without requiring manual pointing but can result in unintended selections, known as the Midas touch problem. A confirmation trigger eliminates this issue but requires additional physical and conscious user effort. Brain-computer interfaces (BCIs), particularly passive BCIs harnessing anticipatory potentials such as the Stimulus-Preceding Negativity (SPN), evoked when users anticipate a forthcoming stimulus, present an effortless implicit solution for selection confirmation. Within a VR context, our research uniquely demonstrates that the SPN has the potential to decode intent towards the visually focused target. We reinforce the scientific understanding of its mechanism by addressing a confounding factor: we demonstrate that the SPN is driven by the user's intent to select the target, not by the stimulus feedback itself. Furthermore, we examine the effect of familiarly placed targets, finding that the SPN may be evoked more quickly as users acclimatize to target locations, a key insight for everyday BCIs.
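
The pipeline implied here pairs gaze dwell on a target with a passive EEG check for anticipatory negativity before the expected feedback stimulus. The sketch below is only illustrative and is not the authors' classifier: the sampling rate, pre-stimulus window, dwell time, and amplitude threshold are hypothetical placeholders that would need per-user calibration.

```python
# Illustrative sketch only: confirming a gaze-dwelled target with a simple
# SPN check (mean amplitude over a pre-stimulus window). Sampling rate,
# window, and threshold are hypothetical and would need per-user calibration.
import numpy as np

FS = 500                 # EEG sampling rate in Hz (assumed)
PRE_WINDOW_S = 0.5       # last 500 ms before the anticipated feedback onset
SPN_THRESHOLD_UV = -2.0  # amplitude threshold in microvolts (assumed)

def spn_amplitude(epoch_uv: np.ndarray) -> float:
    """Mean amplitude over the pre-stimulus window, averaged across channels.

    epoch_uv: (n_channels, n_samples), baseline-corrected and time-locked so
    the last sample coincides with the anticipated stimulus onset."""
    n_pre = int(PRE_WINDOW_S * FS)
    return float(epoch_uv[:, -n_pre:].mean())

def confirm_selection(gaze_dwell_s: float, epoch_uv: np.ndarray,
                      min_dwell_s: float = 0.3) -> bool:
    """Confirm a selection only if the user fixates the target long enough
    and shows an anticipatory negativity (SPN) toward the upcoming feedback."""
    if gaze_dwell_s < min_dwell_s:
        return False
    return spn_amplitude(epoch_uv) < SPN_THRESHOLD_UV
```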

Authors
G S Rajshekar Reddy
University of Colorado Boulder, Boulder, Colorado, United States
Michael J. Proulx
Meta Reality Labs Research, Redmond, Washington, United States
Leanne Hirshfield
University of Colorado Boulder, Boulder, Colorado, United States
Anthony Ries
DEVCOM Army Research Laboratory, Aberdeen Proving Ground, Maryland, United States
Paper URL

doi.org/10.1145/3613904.3641925

Video
Gaze on the Go: Effect of Spatial Reference Frame on Visual Target Acquisition During Physical Locomotion in Extended Reality
Abstract

Spatial interaction relies on fast and accurate visual acquisition. In this work, we analyse how visual acquisition and tracking of targets presented in a head-mounted display is affected by the user moving linearly at walking and jogging paces. We study four reference frames in which targets can be presented: Head and World, where targets are affixed relative to the head and environment, respectively; HeadDelay, where targets are presented in the head coordinate system but follow head movement with a delay; and a novel Path frame, where targets remain at a fixed distance in front of the user, in the direction of their movement. Results of our study in virtual reality demonstrate that the more stable the target is relative to the environment, the faster and more precisely it can be fixated. The results have practical significance as head-mounted displays enable interaction during mobility, in particular when eye tracking is considered as input.
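
To make the four reference frames concrete, the following sketch shows one simple per-frame update of a target's position for each condition. It is a minimal illustration under assumptions not stated in the abstract (2D ground-plane positions, yaw-only head rotation, a hypothetical smoothing factor for HeadDelay), not the authors' implementation.

```python
# Minimal sketch of per-frame target placement for each reference frame
# (2D ground-plane positions, yaw-only head rotation; the smoothing factor
# and distance are illustrative, not the authors' values).
import numpy as np

def head_frame(head_pos, head_yaw, offset_local):
    """Head: the target is rigidly attached to the head."""
    c, s = np.cos(head_yaw), np.sin(head_yaw)
    rot = np.array([[c, -s], [s, c]])
    return head_pos + rot @ offset_local

def world_frame(target_world_pos):
    """World: the target stays fixed in the environment."""
    return target_world_pos

def head_delay_frame(prev_pos, head_pos, head_yaw, offset_local, smoothing=0.1):
    """HeadDelay: the target follows the head-attached position with a lag."""
    goal = head_frame(head_pos, head_yaw, offset_local)
    return prev_pos + smoothing * (goal - prev_pos)

def path_frame(user_pos, move_dir, distance=2.0):
    """Path: the target stays a fixed distance ahead along the walking path."""
    d = np.asarray(move_dir, dtype=float)
    d /= (np.linalg.norm(d) + 1e-9)
    return user_pos + distance * d
```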

Authors
Pavel Manakhov
Aarhus University, Aarhus, Denmark
Ludwig Sidenmark
University of Toronto, Toronto, Ontario, Canada
Ken Pfeuffer
Aarhus University, Aarhus, Denmark
Hans Gellersen
Lancaster University, Lancaster, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642915

Video
MOSion: Gaze Guidance with Motion-triggered Visual Cues by Mosaic Patterns
Abstract

We propose a gaze-guiding method called MOSion that adjusts the guiding strength in response to observers' motion, using a high-speed projector and the afterimage effect of the human visual system. Our method decomposes the target area into mosaic patterns to embed visual cues in the perceived images. The patterns direct attention to the target area only for moving observers, while a stationary observer sees the original image with little distortion because of light integration in visual perception. Precomputing the patterns provides this adaptive guiding effect without tracking devices or motion-dependent computational costs. Our evaluation and user study show that the mosaic decomposition enhances perceived saliency with few visual artifacts, especially under moving conditions. Our method, embedded in white light, works in various situations such as planar posters, advertisements, and curved objects.
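
One way to picture the decomposition is as a zero-mean mosaic modulation split across high-speed sub-frames: a stationary observer integrates the sub-frames back to the original image, while observer motion breaks the integration and reveals the high-contrast mosaic cue. The sketch below is a simplified stand-in for this idea, not the authors' pattern-generation algorithm; the cell size and amplitude are hypothetical.

```python
# Simplified illustration (not the MOSion algorithm): split an image region
# into two complementary sub-frames whose temporal average equals the
# original, by adding a zero-mean checkerboard modulation.
import numpy as np

def mosaic_subframes(img, cell=8, amp=0.2):
    """img: float image in [0, 1], shape (H, W).
    Returns two sub-frames; their mean is exactly img, so a stationary
    observer perceives the original, while motion reveals the mosaic."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sign = np.where(((ys // cell) + (xs // cell)) % 2 == 0, 1.0, -1.0)
    # Clamp the per-pixel amplitude so neither sub-frame leaves [0, 1].
    a = np.minimum(amp, np.minimum(img, 1.0 - img))
    return img + a * sign, img - a * sign
```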

Authors
Arisa Kohtani
Tokyo Institute of Technology, Tokyo, Japan
Shio Miyafuji
Tokyo Institute of Technology, Tokyo, Japan
Keishiro Uragaki
Aoyama Gakuin University, Tokyo, Japan
Hidetaka Katsuyama
Tokyo Institute of Technology, Tokyo, Japan
Hideki Koike
Tokyo Institute of Technology, Tokyo, Japan
Paper URL

doi.org/10.1145/3613904.3642577

Video
FocusFlow: 3D Gaze-Depth Interaction in Virtual Reality Leveraging Active Visual Depth Manipulation
Abstract

Gaze interaction presents a promising avenue in Virtual Reality (VR) due to its intuitive and efficient user experience. Yet, the depth control inherent in our visual system remains underutilized in current methods. In this study, we introduce FocusFlow, a hands-free interaction method that capitalizes on human visual depth perception within the 3D scenes of Virtual Reality. We first develop a binocular visual depth detection algorithm to understand eye input characteristics. We then propose a layer-based user interface and introduce the concept of a "Virtual Window" that offers intuitive and robust gaze-depth VR interaction despite the limited accuracy and precision of visual depth estimation at farther distances. Finally, to help novice users actively manipulate their visual depth, we propose two learning strategies that use different visual cues to help users master visual depth control. Our user studies with 24 participants demonstrate the usability of the proposed Virtual Window concept as a gaze-depth interaction method. In addition, our findings reveal that the user experience can be enhanced through an effective learning process with adaptive visual cues, helping users develop muscle memory for this new input mechanism. We conclude the paper by discussing potential future research topics of gaze-depth interaction.
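
A common building block for binocular gaze-depth input is triangulating the fixation point from the two eyes' gaze rays: the more the eyes converge, the nearer the estimated depth. The sketch below is a generic closest-point-between-rays estimate under assumed per-eye origins and directions from the headset's eye tracker; it is not FocusFlow's detection algorithm.

```python
# Generic vergence-based depth estimate (assumed inputs: per-eye gaze ray
# origins and directions in a common coordinate frame). Not the paper's
# algorithm; accuracy degrades quickly at far distances, as the abstract notes.
import numpy as np

def fixation_depth(o_left, d_left, o_right, d_right):
    """Triangulate the fixation point as the midpoint of the closest points
    between the two gaze rays, then return its distance from the eyes."""
    d1 = d_left / np.linalg.norm(d_left)
    d2 = d_right / np.linalg.norm(d_right)
    w0 = o_left - o_right
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b
    if denom < 1e-9:                    # nearly parallel gaze: far away
        return np.inf
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    p = 0.5 * ((o_left + t1 * d1) + (o_right + t2 * d2))
    return float(np.linalg.norm(p - 0.5 * (o_left + o_right)))

# Example: eyes 64 mm apart, both converging on a point 1 m straight ahead.
o_l, o_r = np.array([-0.032, 0.0, 0.0]), np.array([0.032, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
print(fixation_depth(o_l, target - o_l, o_r, target - o_r))  # ~1.0
```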

Authors
Chenyang Zhang
University of Illinois at Urbana-Champaign, Champaign, Illinois, United States
Tiansu Chen
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Eric Shaffer
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Elahe Soltanaghai
University of Illinois at Urbana-Champaign, Urbana, Illinois, United States
Paper URL

doi.org/10.1145/3613904.3642589

Video
Snap, Pursuit and Gain: Virtual Reality Viewport Control by Gaze
Abstract

Head-mounted displays let users explore virtual environments through a viewport that is coupled with head movement. In this work, we investigate gaze as an alternative modality for viewport control, enabling exploration of virtual worlds with less head movement. We designed three techniques that leverage gaze based on different eye movements: Dwell Snap for viewport rotation in discrete steps, Gaze Gain for amplified viewport rotation based on gaze angle, and Gaze Pursuit for central viewport alignment of gaze targets. All three techniques enable 360-degree viewport control through naturally coordinated eye and head movement. We evaluated the techniques against controller snap and head amplification baselines, for both coarse and precise viewport control, and found them to be as fast and accurate as the baselines. We observed a high variance in performance, which may be attributable to the different degrees to which humans tend to support gaze shifts with head movement.
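
As a rough illustration of the amplification idea behind a technique like Gaze Gain, the sketch below maps gaze eccentricity (gaze direction relative to head forward) to an amplified viewport yaw rate. The gain, dead zone, and rate clamp are hypothetical parameters, and the rate-based formulation is an assumption made here for illustration, not the paper's implementation.

```python
# Hypothetical "gaze gain"-style mapping: the further gaze deviates from the
# head's forward direction, the faster the viewport rotates. Parameter values
# are placeholders, not taken from the paper.
import numpy as np

def gaze_gain_yaw_rate(gaze_yaw_deg, gain=2.0, dead_zone_deg=2.0,
                       max_rate_deg_s=90.0):
    """Map gaze eccentricity (deg, relative to head forward) to a viewport
    yaw rate (deg/s). Small eccentricities are ignored so ordinary fixations
    near the view centre do not rotate the viewport."""
    ecc = abs(gaze_yaw_deg) - dead_zone_deg
    if ecc <= 0.0:
        return 0.0
    rate = gain * ecc * np.sign(gaze_yaw_deg)
    return float(np.clip(rate, -max_rate_deg_s, max_rate_deg_s))

def update_viewport(viewport_yaw_deg, gaze_yaw_deg, dt):
    """Integrate the amplified rotation over one frame of duration dt (s)."""
    return viewport_yaw_deg + gaze_gain_yaw_rate(gaze_yaw_deg) * dt
```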

Authors
Hock Siang Lee
Lancaster University, Lancaster, Lancashire, United Kingdom
Florian Weidner
Lancaster University, Lancaster, United Kingdom
Ludwig Sidenmark
University of Toronto, Toronto, Ontario, Canada
Hans Gellersen
Lancaster University, Lancaster, United Kingdom
Paper URL

doi.org/10.1145/3613904.3642838

Video