Vergence Matching: Inferring Attention to Objects in 3D Environments for Gaze-Assisted Selection
Description

Gaze pointing is the de facto standard for inferring attention and interacting in 3D environments, but it is constrained by the limits of human motor control and eye-tracking sensors. To circumvent these limits, we propose a vergence-based motion correlation method to detect visual attention toward very small targets. Smooth depth movements relative to the user are induced on 3D objects, which cause slow vergence eye movements when the objects are looked at. Using the principle of motion correlation, the depth movements of the objects and the vergence eye movements are matched to determine which object the user is focusing on. In two user studies, we demonstrate how the technique can reliably infer gaze attention on very small targets, systematically explore how different stimulus motions affect attention detection, and show how the technique can be extended to multi-target selection. Finally, we provide example applications using the concept and design guidelines for small-target and accuracy-independent attention detection in 3D environments.
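The matching step can be illustrated with a minimal sketch of the motion-correlation principle. The function names, the 63 mm interpupillary distance, and the correlation threshold below are illustrative assumptions, not the authors' implementation: per-eye tracking yields a measured vergence trace, each candidate object's scripted depth trajectory is converted to the vergence angle it would induce, and the object whose motion correlates best is taken as the attended one.

```python
import numpy as np

def vergence_angle(depth_m, ipd_m=0.063):
    """Vergence angle (rad) for a symmetric fixation at a given depth.
    ipd_m: interpupillary distance; 63 mm is an assumed average."""
    return 2.0 * np.arctan((ipd_m / 2.0) / depth_m)

def pearson(a, b):
    """Pearson correlation of two equal-length 1D signals."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float(np.mean(a * b))

def match_attended_object(measured_vergence, object_depths, threshold=0.8):
    """Motion correlation over one time window: return the index of the
    object whose induced depth motion best explains the measured vergence
    trace, or None if no correlation clears the threshold.
    object_depths: per-object depth traces sampled at the same timestamps
    as measured_vergence."""
    v = np.diff(measured_vergence)  # compare motion, not absolute angles
    scores = [pearson(v, np.diff(vergence_angle(np.asarray(d))))
              for d in object_depths]
    best = int(np.argmax(scores))
    return best if scores[best] >= threshold else None
```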

A Fitts' Law Study of Gaze-Hand Alignment for Selection in 3D User Interfaces
Description

Gaze-Hand Alignment has recently been proposed for multimodal selection in 3D. The technique takes advantage of gaze for target pre-selection, as it naturally precedes manual input. Selection is then completed when manual input aligns with gaze on the target, without the need for an additional click method. In this work we evaluate two alignment techniques, Gaze&Finger and Gaze&Handray, combining gaze with image-plane pointing versus raycasting, in comparison with hands-only baselines and with Gaze&Pinch as an established multimodal technique. We used a Fitts' Law study design with targets presented at different depths in the visual scene to assess the effect of parallax on performance. The alignment techniques outperformed their respective hands-only baselines. Gaze&Finger is efficient when targets are close to the image plane, but its performance degrades with increasing target depth due to parallax.
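For context, a sketch of the standard Shannon formulation of Fitts' Law that such study designs typically build on; whether the paper uses this exact formulation is not stated in the abstract, and the numbers below are illustrative, not values from the paper.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' index of difficulty (bits), Shannon formulation:
    ID = log2(D / W + 1)."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput (bits/s) for one distance/width condition."""
    return index_of_difficulty(distance, width) / movement_time_s

# Illustrative: a 0.30 m reach to a 0.03 m target selected in 0.9 s
print(index_of_difficulty(0.30, 0.03))  # ~3.46 bits
print(throughput(0.30, 0.03, 0.9))      # ~3.84 bits/s
```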

I Need a Third Arm! Eliciting Body-based Interactions with a Wearable Robotic Arm
Description

Wearable robotic arms (WRAs) open up a unique interaction space that closely integrates the user's body with an embodied robotic collaborator. This space affords diverse interaction styles, including body movement, hand gestures, and gaze. Yet it remains unexplored which commands are desirable from a user perspective. Contributing findings from an elicitation study (N=14), we provide a comprehensive set of interactions for basic robot control, navigation, object manipulation, and emergency situations, performed when the hands are free or occupied. Our study provides insights into preferred body parts, input modalities, and the users' underlying sources of inspiration. Comparing interaction styles between WRAs and off-body robots, we highlight how WRAs enable a range of interactions specific to on-body robots and how users employ WRAs both as tools and as collaborators. We conclude by providing guidance, informed by observed user behavior, on the design of ad-hoc interaction with WRAs.

Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input
Description

Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-Score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gesture for refinement. The classification of Head-Gaze and Head Gesture allows for seamless head-based interaction while avoiding false activation.
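The abstract does not describe HeadBoost's features or model; the following is a minimal sketch, assuming windowed head and eye-in-head angle traces and a boosting classifier (suggested by the name, but an assumption). It exploits the fact that gaze-driven head movement is typically accompanied by counter-rotating eyes (the vestibulo-ocular reflex), whereas gestural head movement is not.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def window_features(head_angles, eye_angles):
    """Features for one window: head-movement energy and head-eye
    coupling. head_angles/eye_angles: (T, 2) yaw/pitch traces."""
    dh, de = np.diff(head_angles, axis=0), np.diff(eye_angles, axis=0)
    coupling = float(np.mean(np.sum(dh * de, axis=1)))  # < 0 under VOR
    return [float(np.abs(dh).mean()), float(dh.std()), coupling]

# Synthetic stand-in data: gaze-driven windows (label 0) show negative
# head-eye coupling, gestural windows (label 1) near-zero coupling.
rng = np.random.default_rng(0)
X = np.r_[rng.normal([0.8, 0.3, -0.5], 0.1, (50, 3)),
          rng.normal([0.8, 0.3,  0.0], 0.1, (50, 3))]
y = np.r_[np.zeros(50), np.ones(50)]
clf = GradientBoostingClassifier().fit(X, y)
```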

Embodying Physics-Aware Avatars in Virtual Reality
Description

Embodiment toward an avatar in virtual reality (VR) is generally stronger when there is a high degree of alignment between the user's and the self-avatar's motion. However, a one-to-one mapping between the two is not always ideal when the user interacts with the virtual environment. On these occasions, the user's input often leads to unnatural behavior that lacks physical realism (e.g., objects penetrating the virtual body, or the body remaining unmoved by impacts). We investigate how adding physics correction to self-avatar motion affects embodiment. A physics-aware self-avatar preserves the physical meaning of the movement but introduces discrepancies between the user's and the self-avatar's motion, whose contingency is a determining factor for embodiment. To understand this impact, we conducted an in-lab study (n = 20) in which participants interacted with obstacles on their upper bodies in VR with and without physics correction. Our results showed that, rather than compromising embodiment, the physics-responsive self-avatar improved embodiment compared to the no-physics condition in both active and passive interactions.
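The abstract does not specify the correction method; as a minimal sketch of what such a physics correction can look like (the function name and the sphere obstacle are illustrative assumptions), the avatar joint below follows the tracked pose one-to-one except when it would penetrate an obstacle, in which case it is projected back to the obstacle surface.

```python
import numpy as np

def physics_corrected_position(tracked_pos, obstacle_center, obstacle_radius):
    """Return the position the self-avatar displays for one joint.
    Equals the tracked position unless it penetrates a sphere obstacle,
    in which case it is projected out to the sphere's surface."""
    offset = tracked_pos - obstacle_center
    dist = np.linalg.norm(offset)
    if dist >= obstacle_radius or dist == 0.0:
        return tracked_pos  # no penetration: keep one-to-one mapping
    return obstacle_center + offset * (obstacle_radius / dist)

# Tracked hand 5 cm deep inside a 10 cm virtual ball: the avatar hand
# is held at the ball's surface, introducing a visible discrepancy.
hand = physics_corrected_position(np.array([0.05, 0.0, 0.0]),
                                  np.zeros(3), 0.10)
```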

Induce a Blink of the Eye: Evaluating Techniques for Triggering Eye Blinks in Virtual Reality
Description

As more and more virtual reality (VR) headsets support eye tracking, recent techniques have started to use eye blinks to induce unnoticeable manipulations of the virtual environment, e.g., to redirect users' actions. However, to exploit their full potential, more control over users' blinking behavior in VR is required. To this end, we propose a set of reflex-based blink triggers that are suited specifically to VR. In accordance with blink-based techniques for redirection, we formulate (i) effectiveness, (ii) efficiency, (iii) reliability, and (iv) unobtrusiveness as central requirements for successful triggers. We implement the software- and hardware-based methods and compare the four most promising approaches in a user study. Our results highlight the pros and cons of the tested triggers and show that those based on the menace, corneal, and dazzle reflexes perform best. From these results, we derive recommendations that help choose suitable blink triggers for VR applications.
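Evaluating reliability presupposes detecting whether a trigger actually elicited a blink. A minimal sketch, assuming the eye tracker reports an eye-openness value in [0, 1] per frame; the threshold and duration bound are illustrative choices, not values from the paper.

```python
def detect_blinks(openness, dt, closed_thresh=0.2, max_blink_s=0.5):
    """Return (start_s, end_s) intervals where eye openness dips below
    closed_thresh for at most max_blink_s; longer closures are treated
    as deliberate eye closing rather than blinks."""
    blinks, start = [], None
    for i, o in enumerate(openness):
        if o < closed_thresh and start is None:
            start = i
        elif o >= closed_thresh and start is not None:
            if (i - start) * dt <= max_blink_s:
                blinks.append((start * dt, i * dt))
            start = None
    return blinks

# A 10 Hz toy trace with one blink between 0.3 s and 0.5 s
print(detect_blinks([1, 1, 1, 0.1, 0.1, 1, 1], dt=0.1))  # ≈ [(0.3, 0.5)]
```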
