Head-Coupled Kinematic Template Matching: A Prediction Model for Ray Pointing in VR

Abstract

This paper presents a new technique to predict the ray pointer landing position for selection movements in virtual reality (VR) environments. The technique adapts and extends a prior 2D kinematic template matching method to VR environments where ray pointers are used for selection. It builds on the insight that the kinematics of a controller and Head-Mounted Display (HMD) can be used to predict the ray's final landing position and angle. An initial study provides evidence that the motion of the head is a key input channel for improving prediction models. A second study validates this technique across a continuous range of distances, angles, and target sizes. On average, the technique's predictions were within 7.3° of the true landing position when 50% of the way through the movement, and within 3.4° when 90% of the way through. Furthermore, compared to a direct extension of Kinematic Template Matching, which only uses controller movement, this head-coupled approach increases prediction accuracy by a factor of 1.8 when 40% of the way through the movement.
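The abstract does not include implementation details, but the general idea of kinematic template matching that it builds on can be illustrated with a minimal, hypothetical sketch: the partially observed angular-velocity profiles of the controller and the HMD are compared against a library of previously recorded movements ("templates") whose landing angles are known, and the best-matching template supplies the prediction. All names below (predict_landing_angle, head_weight, the template fields) are illustrative assumptions, not the authors' implementation; in particular, the paper's actual time normalization and head/controller weighting schemes are not reproduced here.

```python
import numpy as np

def predict_landing_angle(ctrl_vel, head_vel, templates, head_weight=1.0):
    """Sketch of head-coupled kinematic template matching (hypothetical).

    ctrl_vel, head_vel : angular-speed samples observed so far for the
                         controller and the HMD, taken at the same rate
                         over the same time window.
    templates          : list of fully recorded movements, each a dict with
                         'ctrl_vel', 'head_vel' (same sampling rate) and
                         'landing_angle' (the known final ray direction).
    Returns the landing angle of the closest template, or None.
    """
    ctrl_vel = np.asarray(ctrl_vel, dtype=float)
    head_vel = np.asarray(head_vel, dtype=float)
    k = len(ctrl_vel)
    best_err, best_angle = np.inf, None
    for t in templates:
        if len(t['ctrl_vel']) < k or len(t['head_vel']) < k:
            continue  # template shorter than the observed prefix
        # Compare the observed prefix against the same-length prefix
        # of the template, for both input channels (RMS error).
        err_ctrl = np.sqrt(np.mean((np.asarray(t['ctrl_vel'][:k]) - ctrl_vel) ** 2))
        err_head = np.sqrt(np.mean((np.asarray(t['head_vel'][:k]) - head_vel) ** 2))
        err = err_ctrl + head_weight * err_head  # head channel: the key addition
        if err < best_err:
            best_err, best_angle = err, t['landing_angle']
    return best_angle

# Example with two made-up templates; landing angles are (yaw, pitch) in degrees.
templates = [
    {'ctrl_vel': [0.1, 0.8, 1.5, 1.2, 0.4, 0.1],
     'head_vel': [0.0, 0.2, 0.5, 0.4, 0.1, 0.0],
     'landing_angle': (25.0, -5.0)},
    {'ctrl_vel': [0.1, 0.4, 0.7, 0.6, 0.2, 0.1],
     'head_vel': [0.0, 0.1, 0.2, 0.2, 0.1, 0.0],
     'landing_angle': (10.0, 2.0)},
]
# Partially observed movement (roughly the first half):
print(predict_landing_angle([0.1, 0.7, 1.4], [0.0, 0.2, 0.4], templates))
```

The inclusion of the HMD velocity channel alongside the controller channel is what distinguishes the head-coupled approach from a controller-only extension of Kinematic Template Matching, per the abstract.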

Keywords
Endpoint Prediction
Target Prediction
Virtual Reality
VR
Kinematics
Ray Pointing
Template Matching
Authors
Rorik Henrikson
Chatham Labs, Toronto, ON, Canada
Tovi Grossman
University of Toronto, Toronto, ON, Canada
Sean Trowbridge
Facebook Reality Labs, Redmond, WA, USA
Daniel Wigdor
Chatham Labs & University of Toronto, Toronto, ON, Canada
Hrvoje Benko
Facebook Reality Labs, Redmond, WA, USA
DOI

10.1145/3313831.3376489

Paper URL

https://doi.org/10.1145/3313831.3376489

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: VR & physical real input

Paper session
Room: 311 KAUA'I
5 presentations
2020-04-29 20:00:00 – 2020-04-29 21:15:00