GazeZoom: Exploration of Gaze-Assisted Multimodal Techniques for Panning and Zooming

Abstract

Zooming and panning are fundamental input actions for exploring complex 2D and 3D scenes and data such as images, maps, and designs. Multi-touch zoom/pan interactions have proven effective on mobile devices and have been directly ported to HMDs, where they are typically accomplished by analogous but relatively large-scale movements of both hands. We argue that such motions are inefficient and induce fatigue, and we explore how the eye-tracking capabilities of HMDs can be leveraged to improve them. We evaluated three interaction techniques that combine gaze with two-handed, one-handed, and head-based input in a study (N=24) contrasting them against a baseline two-handed technique. The results indicate that the gaze-assisted two- and one-handed techniques outperform the baseline (17%-36% faster), while our head-based technique achieves performance similar to the baseline but leaves the hands free for other tasks. We further developed a VR application demonstrating these techniques and validating their practical applicability.

Authors
Yilong Lin
Southern University of Science and Technology, Shenzhen, China
Mingyu Han
KAIST, Daejeon, Republic of Korea
Weitao Jiang
Southern University of Science and Technology, Shenzhen, China
Seungwoo Je
Southern University of Science and Technology, Shenzhen, China
Ian Oakley
KAIST, Daejeon, Republic of Korea

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Gaze as Input

P1 - Room 124
6 presentations
2026-04-15, 18:00-19:30