Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input


We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMDs). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object, without the need for object recognition or instrumentation of the object. From the grip pose and shape primitive we can infer the surface of the object, and with an activation gesture the object can be enabled as input to the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects, 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges of expanding the concept.
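The pipeline summarized above, recognizing a grip, looking up its associated shape primitive, and inferring an input surface from the grip pose, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation; all names, types, and dimensions here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class GripPose:
    """Tracked hand pose in HMD space (illustrative fields only)."""
    position: tuple  # palm position (x, y, z) in meters
    normal: tuple    # palm-facing direction

# Each gripmark associates a recognized grip class with a shape
# primitive, so no object recognition or instrumentation is needed.
# Primitive sizes below are made-up examples.
GRIPMARK_PRIMITIVES = {
    "notebook_grip": ("plane", (0.15, 0.21)),    # width, height (m)
    "bottle_grip": ("cylinder", (0.035, 0.20)),  # radius, height (m)
}

def infer_surface(grip_class: str, pose: GripPose) -> dict:
    """Infer the object's input surface purely from the grip:
    the primitive gives the surface shape, the grip pose anchors it."""
    primitive, size = GRIPMARK_PRIMITIVES[grip_class]
    return {
        "primitive": primitive,
        "size": size,
        "origin": pose.position,  # surface anchored at the grip pose
        "normal": pose.normal,
    }

# Example: a notebook held 0.4 m in front of the HMD.
surface = infer_surface("notebook_grip", GripPose((0.0, 0.0, 0.4), (0.0, 0.0, -1.0)))
```

In the full system, the recognized surface would only accept touch input after the activation gesture described in the abstract.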

grip recognition
tangible objects
mixed reality
Qian Zhou
Facebook Reality Labs & University of British Columbia, Redmond, WA, USA
Sarah Sykes
Facebook Reality Labs, Redmond, WA, USA
Sidney Fels
University of British Columbia, Vancouver, BC, Canada
Kenrick Kin
Facebook Reality Labs, Redmond, WA, USA



Conference: CHI 2020

The ACM CHI Conference on Human Factors in Computing Systems

Session: Mixed reality

Paper session
Room: 311 KAUA'I
2020-04-28, 18:00–19:15