Gaze-Supported 3D Object Manipulation in Virtual Reality

Abstract

This paper investigates integration, coordination, and transition strategies for combining gaze and hand input in 3D object manipulation in VR. Specifically, this work aims to understand whether incorporating gaze input can benefit VR object manipulation tasks, and how it should be combined with hand input for improved usability and efficiency. We designed four gaze-supported techniques that leverage different combination strategies for object manipulation and evaluated them in two user studies. Overall, we show that gaze did not offer significant performance benefits for transforming objects in the primary working space, where all objects were located in front of the user and within arm's reach, but it can be useful in larger environments with distant targets. We further offer insights regarding combination strategies for gaze and hand input, and derive implications that can help guide the design of future VR systems that incorporate gaze input for 3D object manipulation.

Authors
Difeng Yu
The University of Melbourne, Melbourne, VIC, Australia
Xueshi Lu
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Rongkai Shi
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Hai-Ning Liang
Xi'an Jiaotong-Liverpool University, Suzhou, Jiangsu, China
Tilman Dingler
The University of Melbourne, Melbourne, VIC, Australia
Eduardo Velloso
The University of Melbourne, Melbourne, VIC, Australia
Jorge Goncalves
The University of Melbourne, Melbourne, VIC, Australia
DOI

10.1145/3411764.3445343

Paper URL

https://doi.org/10.1145/3411764.3445343

Conference: CHI 2021

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2021.acm.org/)

Session: Input / Spatial Interaction / Practice Support

[A] Paper Room 10, 2021-05-11 17:00:00~2021-05-11 19:00:00 / [B] Paper Room 10, 2021-05-12 01:00:00~2021-05-12 03:00:00 / [C] Paper Room 10, 2021-05-12 09:00:00~2021-05-12 11:00:00