Online-EYE: Multimodal Implicit Eye Tracking Calibration for XR

Abstract

Unlike other inputs for extended reality (XR) that work out of the box, eye tracking typically requires custom calibration per user or session. We present a multimodal-input approach for implicit calibration of eye trackers in VR that leverages UI interaction for continuous, background calibration. Our method analyzes gaze data alongside controller interactions with UI elements and employs ML techniques to continuously refine the calibration matrix without interrupting users' current tasks, potentially eliminating the need for explicit calibration. We demonstrate the accuracy and effectiveness of this implicit approach across various tasks and real-time applications, achieving eye tracking accuracy comparable to native, explicit calibration. While our evaluation focuses on VR and controller-based interactions, we anticipate that this approach will apply more broadly to various XR devices and input modalities.
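To illustrate the general idea of implicit calibration from UI interactions, here is a minimal sketch in Python. It assumes that at the moment of a controller click, the user's gaze is on the clicked UI element, which yields (raw gaze, target) pairs; a 2D affine correction is then refit by least squares over a sliding window. The class and method names (`ImplicitCalibrator`, `on_ui_click`) are hypothetical, and a simple affine fit stands in for the paper's actual ML-based refinement, which the abstract does not specify.

```python
import numpy as np

class ImplicitCalibrator:
    """Hypothetical sketch of implicit eye tracker calibration.

    Collects (raw gaze, UI target) pairs at controller-click moments and
    refits a homogeneous 2D affine correction by least squares. This is
    an illustrative reconstruction, not the paper's actual pipeline.
    """

    def __init__(self, min_samples=10, max_samples=200):
        self.raw = []       # raw gaze points (normalized screen coords)
        self.target = []    # centers of the UI elements that were clicked
        self.min_samples = min_samples
        self.max_samples = max_samples
        self.A = np.eye(3)  # 3x3 affine correction, starts as identity

    def on_ui_click(self, raw_gaze_xy, element_center_xy):
        """Record one implicit calibration sample at click time."""
        self.raw.append(raw_gaze_xy)
        self.target.append(element_center_xy)
        # Keep a sliding window so the correction can adapt over the session.
        self.raw = self.raw[-self.max_samples:]
        self.target = self.target[-self.max_samples:]
        if len(self.raw) >= self.min_samples:
            self._refit()

    def _refit(self):
        raw = np.asarray(self.raw)
        tgt = np.asarray(self.target)
        X = np.hstack([raw, np.ones((len(raw), 1))])  # (N, 3) homogeneous
        # Least-squares affine fit: solve X @ M ~= tgt for M (3x2).
        M, *_ = np.linalg.lstsq(X, tgt, rcond=None)
        self.A = np.vstack([M.T, [0.0, 0.0, 1.0]])    # store as 3x3

    def correct(self, raw_gaze_xy):
        """Apply the current correction to a live gaze sample."""
        v = self.A @ np.array([raw_gaze_xy[0], raw_gaze_xy[1], 1.0])
        return v[:2]
```

In use, `on_ui_click` would be called from the UI event handler on every confirmed selection, and `correct` applied to each incoming gaze sample, so calibration accumulates in the background without a dedicated calibration task.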

Authors
Baosheng James HOU
Google, Seattle, Washington, United States
Lucy Abramyan
Google, Mountain View, California, United States
Prasanthi Gurumurthy
Google, Mountain View, California, United States
Haley Adams
Google, Mountain View, California, United States
Ivana Tosic Rodgers
Google, Mountain View, California, United States
Eric J. Gonzalez
Google, Seattle, Washington, United States
Khushman Patel
Google, Mountain View, California, United States
Andrea Colaço
Google, Mountain View, California, United States
Ken Pfeuffer
Aarhus University, Aarhus, Denmark
Hans Gellersen
Lancaster University, Lancaster, United Kingdom
Karan Ahuja
Google, Seattle, Washington, United States
Mar Gonzalez-Franco
Google, Seattle, Washington, United States
DOI

10.1145/3706598.3713461

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713461

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Immersive Touch and Gesture Interaction

Room: G303
7 presentations
2025-04-30 20:10:00 – 2025-04-30 21:40:00