DEEP: 3D Gaze Pointing in Virtual Reality Leveraging Eyelid Movement

Abstract

Gaze-based target selection suffers from low input precision and target occlusion. In this paper, we explored leveraging continuous eyelid movement to support efficient, occlusion-robust dwell-based gaze pointing in virtual reality. We first conducted two user studies to examine users' eyelid movement patterns under both unintentional and intentional conditions. The results demonstrated the feasibility of using intentional eyelid movements, which are distinguishable from natural movements, for input. We also examined participants' dwelling patterns for targets of different sizes and locations. Based on these results, we propose DEEP, a novel technique that enables users to see through occlusions by controlling the aperture angle of their eyelids and to select targets by dwelling, with the help of a probabilistic input prediction model. Evaluation results showed that DEEP, incorporating dynamic depth and location selection, significantly outperformed its static variants as well as a naive dwelling baseline technique. Even for 100% occluded targets, it achieved an average selection time of 2.5 s with an error rate of 2.3%.
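The abstract names a probabilistic input prediction model for dwell-based selection but gives no detail. As a rough illustration of how such a predictor can work, below is a minimal sketch assuming gaze samples collected during a dwell scatter around the intended target with isotropic Gaussian noise; the function name, parameters, and noise model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def predict_target(gaze_samples, target_centers, sigma=1.0):
    """Return the index of the most likely target given dwell gaze samples.

    gaze_samples:   (n, 2) array of gaze points collected during the dwell
    target_centers: (m, 2) array of candidate target center positions
    sigma:          assumed std. dev. of gaze noise (hypothetical value)
    """
    gaze = np.asarray(gaze_samples, dtype=float)
    centers = np.asarray(target_centers, dtype=float)

    # Squared distance from every sample to every target center: (m, n).
    sq_dists = ((centers[:, None, :] - gaze[None, :, :]) ** 2).sum(axis=-1)

    # Log-likelihood of all samples for each target under N(center, sigma^2 I);
    # with a uniform prior, the MAP target maximizes this quantity.
    log_likelihoods = -sq_dists.sum(axis=1) / (2.0 * sigma ** 2)

    return int(np.argmax(log_likelihoods))

# Usage: three candidate targets, gaze hovering near the second one.
targets = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
samples = [(4.8, 0.2), (5.1, -0.1), (4.9, 0.3)]
print(predict_target(samples, targets))  # -> 1
```

Scoring whole dwell windows rather than single fixations makes the selection tolerant of gaze jitter, which is one reason a probabilistic formulation suits small or partially occluded targets.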

Authors
Xin Yi
Tsinghua University, Beijing, China
Leping Qiu
Tsinghua University, Beijing, China
Wenjing Tang
Southeast University, Nanjing, China
Yehan Fan
Beijing University of Posts and Telecommunications, Beijing, China
Hewu Li
Tsinghua University, Beijing, China
Yuanchun Shi
Tsinghua University, Beijing, China
Paper URL

https://doi.org/10.1145/3526113.3545673

Conference: UIST 2022

The ACM Symposium on User Interface Software and Technology

Session: XR Interaction

6 presentations
2022-10-31 20:00:00 to 21:30:00