TapGazer: Text Entry with Finger Tapping and Gaze-directed Word Selection

Abstract

While using VR, efficient text entry is a challenge: users cannot easily locate standard physical keyboards, and keys are often out of reach, e.g., when standing. We present TapGazer, a text entry system where users type by tapping their fingers in place. Users can tap anywhere as long as the identity of each tapping finger can be detected with sensors. Ambiguity between different possible input words is resolved by selecting target words with gaze. If gaze tracking is unavailable, ambiguity is resolved by selecting target words with additional taps. We evaluated TapGazer for seated and standing VR: seated novice users typing on touchpads reached 44.81 words per minute (WPM), 79.17% of their QWERTY typing speed. Standing novice users tapped on their thighs with touch-sensitive gloves, reaching 45.26 WPM (71.91%). We analyze TapGazer with a theoretical performance model and discuss its potential for text input in future AR scenarios.
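The core idea — typing by finger identity alone, then disambiguating among candidate words — can be illustrated with a small sketch. This is not the authors' implementation; the finger-to-letter mapping (standard touch-typing finger assignment, thumbs excluded) and the toy lexicon are illustrative assumptions:

```python
# Sketch of TapGazer-style word disambiguation (illustrative, not the
# paper's actual implementation). Each letter maps to one of 8 fingers;
# words with identical finger sequences become ambiguous candidates.
from collections import defaultdict

# Hypothetical mapping based on standard touch-typing finger assignment
# (L4 = left pinky ... R4 = right pinky).
FINGER_OF = {
    **{c: "L4" for c in "qaz"}, **{c: "L3" for c in "wsx"},
    **{c: "L2" for c in "edc"}, **{c: "L1" for c in "rfvtgb"},
    **{c: "R1" for c in "yhnujm"}, **{c: "R2" for c in "ik"},
    **{c: "R3" for c in "ol"}, **{c: "R4" for c in "p"},
}

def finger_code(word):
    """Reduce a word to the sequence of fingers that type it."""
    return tuple(FINGER_OF[c] for c in word.lower())

def build_index(lexicon):
    """Group dictionary words by their shared finger sequence."""
    index = defaultdict(list)
    for word in lexicon:
        index[finger_code(word)].append(word)
    return index

lexicon = ["fun", "run", "gym"]  # toy lexicon
index = build_index(lexicon)
# All three words share the finger sequence (L1, R1, R1), so a tap
# sequence yields several candidates; in TapGazer the user picks one
# with gaze (or extra taps when gaze tracking is unavailable).
print(index[finger_code("fun")])
```

Running the sketch prints all candidates for the tapped sequence; the selection step itself (gaze or additional taps) is where TapGazer's contribution lies.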

Authors
Zhenyi He
New York University, New York, New York, United States
Christof Lutteroth
University of Bath, Bath, United Kingdom
Ken Perlin
New York University, New York, New York, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501838

Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Text & Pen

286–287
4 presentations
2022-05-03 01:15:00 – 2022-05-03 02:30:00