Mixed reality

Paper session

Conference name
CHI 2020
Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input
Abstract

We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMDs). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object, without the need for object recognition or instrumenting the object. From the grip pose and shape primitive we can infer the surface of the object. With an activation gesture, we can enable the object for use as input to the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects, 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges for expanding the concept.
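
The pipeline the abstract describes can be sketched in a few lines: match the current grip pose against a small set of registered gripmarks, look up the associated shape primitive, and derive an input surface relative to the tracked hand. The sketch below is a minimal illustration under assumed hand-tracking features; the names, templates, and rejection threshold are hypothetical, not the authors' implementation.

```python
import numpy as np

# Hypothetical illustration of the Gripmarks pipeline:
# grip pose -> gripmark -> shape primitive -> input surface.
# Templates and the rejection threshold are placeholder assumptions.

# Each registered gripmark stores a template feature vector (e.g., finger
# joint angles) and the shape primitive of the object being held.
GRIPMARKS = {
    "mug":      {"template": np.zeros(20), "primitive": "cylinder"},
    "notebook": {"template": np.ones(20),  "primitive": "flat_rectangle"},
    # ... three more gripmarks for a total of five
}

def classify_grip(joint_angles: np.ndarray) -> str | None:
    """Nearest-template match of the current grip against all gripmarks."""
    best, best_dist = None, float("inf")
    for name, mark in GRIPMARKS.items():
        dist = np.linalg.norm(joint_angles - mark["template"])
        if dist < best_dist:
            best, best_dist = name, dist
    return best if best_dist < 2.0 else None  # reject unfamiliar grips

def surface_from_grip(name: str, grip_pose: np.ndarray) -> dict:
    """Infer the object's input surface from the grip pose and primitive.

    The surface is expressed relative to the tracked hand, so no object
    recognition or instrumentation is needed.
    """
    return {"primitive": GRIPMARKS[name]["primitive"], "pose": grip_pose}
```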

Keywords
gripmarks
grip recognition
tangible objects
mixed reality
Authors
Qian Zhou
Facebook Reality Labs & University of British Columbia, Redmond, WA, USA
Sarah Sykes
Facebook Reality Labs, Redmond, WA, USA
Sidney Fels
University of British Columbia, Vancouver, BC, Canada
Kenrick Kin
Facebook Reality Labs, Redmond, WA, USA
DOI

10.1145/3313831.3376313

Paper URL

https://doi.org/10.1145/3313831.3376313

Embodied Axes: Tangible, Actuated Interaction for 3D Augmented Reality Data Spaces
Abstract

We present Embodied Axes, a controller that supports selection operations for 3D imagery and data visualisations in Augmented Reality. The device is an embodied representation of a 3D data space: each of its three orthogonal arms corresponds to a data axis or a domain-specific frame of reference. Each axis is composed of a pair of tangible, actuated range sliders for precise data selection, and rotary encoding knobs for additional parameter tuning or menu navigation. The motor-actuated sliders support alignment to positions of significant values within the data, or coordination with other input: e.g., mid-air gestures in the data space, touch gestures on the surface below the data, or another Embodied Axes device supporting multi-user scenarios. We conducted expert enquiries in medical imaging, which provided formative feedback on domain tasks and refinements to the design. Additionally, a controlled user study found that Embodied Axes was overall more accurate than conventional tracked controllers for selection tasks.
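
As a concrete reading of the selection mechanism, each arm's pair of range sliders bounds one axis of an axis-aligned box in data space, and the motorised sliders can also be driven to snap to significant values. The sketch below illustrates this under stated assumptions; the function names and the interquartile-range example are illustrative, not the device's actual software.

```python
import numpy as np

# Hypothetical sketch: three pairs of slider values define an axis-aligned
# selection box; data points inside the box are selected.

def select_in_box(points: np.ndarray, slider_pairs) -> np.ndarray:
    """Select rows of an (N, 3) array inside the box given by
    [(lo_x, hi_x), (lo_y, hi_y), (lo_z, hi_z)] in data units."""
    mask = np.ones(len(points), dtype=bool)
    for axis, (lo, hi) in enumerate(slider_pairs):
        mask &= (points[:, axis] >= lo) & (points[:, axis] <= hi)
    return points[mask]

def snap_to_iqr(values: np.ndarray) -> tuple[float, float]:
    """Example actuation target: drive one slider pair to the
    interquartile range of the data along its axis."""
    return float(np.percentile(values, 25)), float(np.percentile(values, 75))
```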

Keywords
3D Visualisation
Device
Actuation
Tangible Interaction
Augmented Reality
Authors
Maxime Cordeil
Monash University, Melbourne, VIC, Australia
Benjamin Bach
Edinburgh University, Edinburgh, United Kingdom
Andrew Cunningham
University of South Australia, Adelaide, Australia
Bastian Montoya
University of South Australia, Adelaide, Australia
Ross T. Smith
University of South Australia, Adelaide, Australia
Bruce H. Thomas
University of South Australia, Mawson Lakes, Australia
Tim Dwyer
Monash University, Melbourne, VIC, Australia
DOI

10.1145/3313831.3376613

Paper URL

https://doi.org/10.1145/3313831.3376613

Body Follows Eye: Unobtrusive Posture Manipulation Through a Dynamic Content Position in Virtual Reality
Abstract

While virtual objects are likely to be a part of future interfaces, we lack knowledge of how the dynamic position of virtual objects influences users' posture. In this study, we investigated users' posture change following unobtrusive and swift motions of a content window in virtual reality (VR). In two perception studies, we estimated the perception thresholds for undetectably slow motion and for displacement during an eye blink. In a formative study, we compared users' performance, posture change, and subjective responses under unobtrusive, swift, and no motion. Based on the results, we designed concept applications and explored the potential design space of moving virtual content for unobtrusive posture change. With our study, we discuss interfaces that control users and initial design guidelines for unobtrusive posture manipulation.
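
The two manipulation strategies compared in the study can be illustrated schematically: drift the content window slower than the detectable speed, or relocate it while vision is suppressed by a blink. The sketch below uses placeholder threshold values; the paper's measured thresholds are not reproduced here.

```python
# Hypothetical per-frame update combining the two strategies.
# Both limits are placeholders, not the paper's measured thresholds.

SLOW_SPEED_LIMIT = 0.01   # m/s: drift speed below the perception threshold
BLINK_JUMP_LIMIT = 0.05   # m: displacement hidden within one eye blink

def update_window(window_pos: float, target_pos: float,
                  dt: float, blinking: bool) -> float:
    """Move the content window one frame toward the target (1D for brevity)."""
    offset = target_pos - window_pos
    if offset == 0.0:
        return window_pos
    direction = 1.0 if offset > 0 else -1.0
    if blinking:
        step = min(abs(offset), BLINK_JUMP_LIMIT)       # swift, blink-masked jump
    else:
        step = min(abs(offset), SLOW_SPEED_LIMIT * dt)  # unobtrusive drift
    return window_pos + direction * step
```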

Keywords
Posture change
Unobtrusive interaction
Virtual Reality
Authors
Joon Gi Shin
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Doheon Kim
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
Chaehan So
Yonsei University, Seoul, Republic of Korea
Daniel Saakes
Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
DOI

10.1145/3313831.3376794

Paper URL

https://doi.org/10.1145/3313831.3376794

HiveFive: Immersion Preserving Attention Guidance in Virtual Reality
Abstract

Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
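
For the algorithmic (non-biological) condition, swarm motion can be generated with a boids-style update biased toward the guidance target. The following is a sketch of that idea only; the weights and update rule are assumptions, not HiveFive's actual algorithm.

```python
import numpy as np

# Hypothetical boids-style swarm drifting toward a guidance target.

def step_swarm(pos: np.ndarray, vel: np.ndarray,
               target: np.ndarray, dt: float = 1 / 72):
    """One update of an (N, 3) particle swarm; returns (pos, vel)."""
    cohesion = pos.mean(axis=0) - pos      # pull toward the swarm centre
    alignment = vel.mean(axis=0) - vel     # match the average velocity
    guidance = target - pos                # bias toward the cue target
    vel = vel + dt * (0.5 * cohesion + 0.3 * alignment + 1.0 * guidance)
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > 1.0, vel / speed, vel)  # cap speed: keep cue subtle
    return pos + vel * dt, vel
```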

Keywords
attention guidance
virtual reality
immersion
eye-tracking
particle swarms
user studies
Authors
Daniel Lange
University of Oldenburg, Oldenburg, Germany
Tim Claudius Stratmann
OFFIS - Institute for Information Technology, Oldenburg, Germany
Uwe Gruenefeld
OFFIS - Institute for Information Technology, Oldenburg, Germany
Susanne Boll
University of Oldenburg, Oldenburg, Germany
DOI

10.1145/3313831.3376803

Paper URL

https://doi.org/10.1145/3313831.3376803

Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR
Abstract

In Handheld Augmented Reality, users look at AR scenes through a smartphone held in their hand. In this setting, having a mid-air pointing device like a pen in the other hand greatly expands the interaction possibilities. For example, it lets users create 3D sketches and models while on the go. However, perceptual issues in Handheld AR make it difficult to judge the distance of a virtual object, and thus hard to align a pen with it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of helpfulness, confidence, and comprehensibility for each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a 'heatmap' technique that colors the objects in the scene based on their distance to the pen.
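
The favored 'heatmap' visualization amounts to a distance-to-color mapping evaluated per object. A minimal sketch, assuming a tracked pen-tip position and a simple red-to-blue ramp; the falloff distance and color ramp are illustrative, not the study's exact rendering.

```python
import numpy as np

# Hypothetical distance-to-color mapping for the 'heatmap' technique.

FALLOFF = 0.15  # m: beyond this distance, objects keep the "far" colour

def heatmap_color(obj_pos: np.ndarray, pen_tip: np.ndarray):
    """Return an RGB tint for an object: red near the pen tip, blue far."""
    d = float(np.linalg.norm(obj_pos - pen_tip))
    t = min(d / FALLOFF, 1.0)   # 0 at the pen tip, 1 at or beyond FALLOFF
    return (1.0 - t, 0.0, t)    # simple red-to-blue ramp
```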

Keywords
Augmented Reality
mid-air
modeling
interaction
depth perception
smartphone
3D pen
depth cues
Authors
Philipp Wacker
RWTH Aachen University, Aachen, Germany
Adrian Wagner
RWTH Aachen University, Aachen, Germany
Simon Voelker
RWTH Aachen University, Aachen, Germany
Jan Borchers
RWTH Aachen University, Aachen, Germany
DOI

10.1145/3313831.3376848

Paper URL

https://doi.org/10.1145/3313831.3376848
