Predicting Gaze-based Target Selection in Augmented Reality Headsets based on Eye and Head Endpoint Distributions

Abstract

Target selection is a fundamental task in interactive Augmented Reality (AR) systems. Predicting the intended target of a selection in such systems can provide users with a smooth, low-friction interaction experience. Our work aims to predict gaze-based target selection in AR headsets using eye and head endpoint distributions, which describe the probability distribution of eye and head 3D orientations when a user triggers a selection input. We first conducted a user study to collect users’ eye and head behavior in a gaze-based pointing selection task with two confirmation mechanisms (air tap and blinking). Based on the study results, we then built two models: a unimodal model using only eye endpoints and a multimodal model using both eye and head endpoints. Results from a second user study showed that integrating our models into gaze-based selection techniques improved pointing accuracy by approximately 32%.
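The core idea of endpoint-distribution-based prediction can be illustrated with a minimal sketch: given the angular offset between the gaze (or head) ray endpoint and each candidate target at the moment of confirmation, score each target by the likelihood of those offsets under a Gaussian endpoint distribution and pick the best-scoring one. This is not the authors' implementation; the isotropic Gaussians, the sigma values, and the modality weight below are illustrative placeholders.

```python
import math

def gaussian_log_likelihood(offset_deg, sigma_deg):
    """Log-likelihood of a 2D angular offset under an isotropic Gaussian
    endpoint distribution centered on the target (sigma in degrees)."""
    x, y = offset_deg
    return (-(x * x + y * y) / (2.0 * sigma_deg ** 2)
            - math.log(2.0 * math.pi * sigma_deg ** 2))

def predict_target(eye_offsets, head_offsets,
                   sigma_eye=1.5, sigma_head=3.0, w_eye=0.7):
    """Return the index of the most likely intended target.

    eye_offsets / head_offsets: per-target (x, y) angular offsets in degrees
    between the eye/head ray endpoint and the target center at the moment
    the selection input is triggered. Sigmas and the eye/head weight are
    hypothetical values, not fitted parameters from the paper.
    """
    best_idx, best_score = None, -math.inf
    for i, (eye_off, head_off) in enumerate(zip(eye_offsets, head_offsets)):
        # Multimodal score: weighted sum of eye and head log-likelihoods.
        score = (w_eye * gaussian_log_likelihood(eye_off, sigma_eye)
                 + (1.0 - w_eye) * gaussian_log_likelihood(head_off, sigma_head))
        if score > best_score:
            best_idx, best_score = i, score
    return best_idx
```

A unimodal variant would simply set `w_eye=1.0`, scoring targets by eye endpoints alone; in practice the distribution parameters would be fitted from data such as that collected in the paper's first user study.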

Authors
Yushi Wei
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Rongkai Shi
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Difeng Yu
University of Melbourne, Melbourne, Victoria, Australia
Yihong Wang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Yue Li
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Lingyun Yu
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Paper URL

https://doi.org/10.1145/3544548.3581042

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: GUIs, Gaze, and Gesture-based Interaction

Hall C
6 presentations
2023-04-25 18:00–19:30