A Multimodal Approach for Targeting Error Detection in Virtual Reality Using Implicit User Behavior

Abstract

Although the point-and-select interaction method has been shown to lead to both user- and system-initiated errors, it remains prevalent in VR scenarios. Solutions that facilitate selection interactions exist; however, they do not address the challenges caused by targeting inaccuracy. To reduce the effort required to target objects, we developed a model that quickly detects targeting errors after they occur, using implicit multimodal user behavioral data to identify possible targeting outcomes. Using a dataset collected from 23 participants engaged in VR targeting tasks, we trained a deep learning model to differentiate between correct and incorrect targeting events within 0.5 seconds of a selection, achieving an AUC-ROC of 0.9. We then evaluated the utility of this model in a user study with 25 participants, which found that participants recovered from more errors, and recovered faster, when assisted by the model. These results advance our understanding of targeting errors in VR and inform the design of future intelligent error-aware systems.
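The abstract does not specify the model architecture or feature set. Purely as an illustrative sketch, the Python (PyTorch) code below shows one plausible shape of such a classifier: a small recurrent network over the 0.5-second window of multimodal signals following each selection, trained as a binary error/correct classifier and scored with AUC-ROC. Only the 0.5 s window and the AUC-ROC metric come from the abstract; the 60 Hz sampling rate, the nine-channel feature vector (e.g., gaze, head, and controller signals), the GRU architecture, and the synthetic data are all assumptions.

# Minimal sketch of a targeting-error classifier over implicit multimodal
# behavior. Signal names, window size in samples, and the architecture are
# illustrative assumptions, not the paper's published method.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

# Assumption: each selection yields a 0.5 s post-selection window of
# multimodal signals sampled at 60 Hz -> 30 timesteps x 9 channels.
TIMESTEPS, CHANNELS = 30, 9

class ErrorDetector(nn.Module):
    """GRU over the post-selection window; outputs one binary logit."""
    def __init__(self, channels: int = CHANNELS, hidden: int = 32):
        super().__init__()
        self.gru = nn.GRU(channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(x)                    # h: (1, batch, hidden)
        return self.head(h[-1]).squeeze(-1)   # logits, shape (batch,)

# Synthetic stand-in data; real training would use labeled selections
# (1 = targeting error, 0 = correct) from the 23-participant dataset.
x = torch.randn(256, TIMESTEPS, CHANNELS)
y = torch.randint(0, 2, (256,)).float()

model = ErrorDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for _ in range(20):                            # brief loop for illustration
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    scores = torch.sigmoid(model(x)).numpy()
print("AUC-ROC:", roc_auc_score(y.numpy(), scores))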

Authors
Naveen Sendhilnathan
Meta, Seattle, Washington, United States
Ting Zhang
Meta Inc., Redmond, Washington, United States
David Bethge
Meta Inc., Redmond, Washington, United States
Michael Nebeling
Meta Inc., Redmond, Washington, United States
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Tanya R. Jonker
Meta Inc., Redmond, Washington, United States
DOI

10.1145/3706598.3713777

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713777

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Virtual and Mixed Reality Interaction

Annex Hall F204
7 presentations
2025-04-30 23:10 – 2025-05-01 00:40