StickyPie: A Gaze-Based, Scale-Invariant Marking Menu Optimized for AR/VR

Abstract

This work explores the design of marking menus for gaze-based AR/VR menu selection by expert and novice users. We first identify and explain the challenges inherent in ocular motor control and current eye-tracking hardware, including overshooting, incorrect selections, and false activations. Through three empirical studies, we optimized and validated design parameters that mitigate these errors while reducing completion time, task load, and eye fatigue. Based on these findings, we derived a set of design guidelines to support gaze-based marking menus in AR/VR. To overcome the overshoot errors observed in expert eye-based marking behaviour, we developed StickyPie, a marking menu technique that enables scale-independent marking input by estimating saccade landing positions. An evaluation showed that StickyPie was easier to learn than the traditional technique (RegularPie) and was 10% more efficient after three sessions.
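StickyPie's key mechanism, estimating where a saccade will land before the eye arrives, is what makes the marking input scale-independent: the menu can react to the predicted target instead of waiting for the gaze to settle (and overshoot). The paper does not publish reference code, so the sketch below only illustrates one common approach to saccade landing estimation: a velocity-threshold onset detector combined with a linear "main sequence" model relating peak velocity to amplitude. All function names, thresholds, and coefficients are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def predict_landing(samples, timestamps, vel_threshold=30.0, amp_per_peak_vel=0.022):
    """Hypothetical sketch: predict where an in-flight saccade will land.

    samples:          (N, 2) gaze positions in degrees of visual angle
    timestamps:       (N,) sample times in seconds
    vel_threshold:    speed (deg/s) above which motion counts as a saccade
    amp_per_peak_vel: assumed main-sequence slope (deg of amplitude per
                      deg/s of peak velocity); a real system would fit
                      this per user rather than hard-code it
    """
    # Sample-to-sample gaze speeds (deg/s)
    speeds = np.linalg.norm(np.diff(samples, axis=0), axis=1) / np.diff(timestamps)
    saccading = speeds > vel_threshold
    if not saccading.any():
        return samples[-1]  # fixation: treat current gaze as the target

    onset = int(np.argmax(saccading))          # first super-threshold sample
    peak_speed = float(speeds[onset:].max())   # peak speed observed so far
    amplitude = amp_per_peak_vel * peak_speed  # main-sequence amplitude guess

    # Extrapolate along the saccade's direction from its onset position
    displacement = samples[-1] - samples[onset]
    norm = np.linalg.norm(displacement)
    if norm == 0.0:
        return samples[-1]
    return samples[onset] + amplitude * (displacement / norm)

# Example: a rightward saccade captured mid-flight at 120 Hz
t = np.arange(5) / 120.0
gaze = np.array([[0, 0], [0, 0], [0.5, 0], [2.0, 0], [4.0, 0]], dtype=float)
print(predict_landing(gaze, t))  # predicted landing lies ahead of current gaze
```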

Authors
Sunggeun Ahn
Chatham Labs, Toronto, Ontario, Canada
Stephanie Santosa
Chatham Labs, Toronto, Ontario, Canada
Mark Parent
Chatham Labs, Toronto, Ontario, Canada
Daniel Wigdor
Chatham Labs, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Marcello Giordano
Chatham Labs, Toronto, Ontario, Canada
DOI

10.1145/3411764.3445297

Paper URL

https://doi.org/10.1145/3411764.3445297

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Input / Spatial Interaction / Practice Support

[A] Paper Room 10, 2021-05-11 17:00:00 ~ 2021-05-11 19:00:00
[B] Paper Room 10, 2021-05-12 01:00:00 ~ 2021-05-12 03:00:00
[C] Paper Room 10, 2021-05-12 09:00:00 ~ 2021-05-12 11:00:00