Seeing and Touching the Air: Unraveling Eye-Hand Coordination in Mid-Air Gesture Typing for Mixed Reality

Abstract

Mid-air text entry in mixed reality (MR) headsets has shown promise but remains less efficient than traditional input methods. While research has focused on improving typing performance, the mechanics of mid-air gesture typing, especially eye-hand coordination, are less well understood. This paper investigates the visuomotor coordination of mid-air gesture keyboards through a user study (n=16) comparing gesture typing on a tablet and in mid-air. Through an expert task, we demonstrate that users were able to achieve comparable text input performance in both conditions. Our in-depth analysis reveals significant differences in eye-hand coordination patterns between gesture typing on a tablet and in mid-air. Mid-air gesture typing requires almost all visual attention to remain on the keyboard area, along with more consistent eye-hand synchronization, to compensate for the increased motor and cognitive demands in the absence of physical boundaries. These insights have important implications for the design of more efficient text input methods.

Authors
Jinghui Hu
University of Cambridge, Cambridge, United Kingdom
John J. Dudley
University of Cambridge, Cambridge, United Kingdom
Per Ola Kristensson
University of Cambridge, Cambridge, United Kingdom
DOI

10.1145/3706598.3713743

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713743

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: XR

G302
7 presentations
2025-04-29 18:00:00 – 2025-04-29 19:30:00