Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input

Abstract

We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMDs). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object, without the need for object recognition or for instrumenting the object. From the grip pose and shape primitive we can infer the surface of the object. With an activation gesture, we can enable the object for use as input to the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects: 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges of expanding the concept.
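The pipeline the abstract describes — recognize a hand grip, map it to a shape primitive, and only treat the object as an input surface after an activation gesture — can be sketched as follows. This is a minimal conceptual illustration, not the paper's implementation; all names, grip labels, and primitives here are hypothetical.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping from registered grips to shape primitives.
# In the paper, the grip pose alone identifies the primitive, so no
# object recognition or instrumentation of the object is needed.
GRIP_TO_PRIMITIVE = {
    "mug_grip": "cylinder",
    "notebook_grip": "box",
    "pen_grip": "thin_cylinder",
}

@dataclass
class GripmarkState:
    grip: Optional[str] = None
    active: bool = False  # becomes True only after the activation gesture

    def recognize_grip(self, grip_label: str) -> Optional[str]:
        """Associate a detected hand grip with its shape primitive;
        the object's input surface can then be inferred from the
        grip pose and the primitive."""
        self.grip = grip_label
        return GRIP_TO_PRIMITIVE.get(grip_label)

    def activation_gesture(self) -> None:
        # Gate input on an explicit gesture so an object held for
        # other reasons is not accidentally treated as an input surface.
        if self.grip in GRIP_TO_PRIMITIVE:
            self.active = True

state = GripmarkState()
primitive = state.recognize_grip("mug_grip")  # -> "cylinder"
state.activation_gesture()                    # object now usable as HMD input
```

The explicit activation step mirrors the paper's design: holding an object alone does not make it an input surface; the user opts in with a gesture.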

Keywords
gripmarks
grip recognition
tangible objects
mixed reality
Authors
Qian Zhou
Facebook Reality Labs & University of British Columbia, Redmond, WA, USA
Sarah Sykes
Facebook Reality Labs, Redmond, WA, USA
Sidney Fels
University of British Columbia, Vancouver, BC, Canada
Kenrick Kin
Facebook Reality Labs, Redmond, WA, USA
DOI

10.1145/3313831.3376313

Paper URL

https://doi.org/10.1145/3313831.3376313

Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2020.acm.org/)

Session: Mixed reality
Paper session
311 KAUA'I
5 presentations
2020-04-28 18:00:00 – 2020-04-28 19:15:00