Gripmarks: Using Hand Grips to Transform In-Hand Objects into Mixed Reality Input
Description

We introduce Gripmarks, a system that enables users to opportunistically use objects they are already holding as input surfaces for mixed reality head-mounted displays (HMDs). Leveraging handheld objects reduces the need for users to free up their hands or acquire a controller to interact with their HMD. Gripmarks associate a particular hand grip with the shape primitive of the physical object, without the need for object recognition or instrumenting the object. From the grip pose and shape primitive we can infer the surface of the object, and with an activation gesture we can enable the object as an input surface for the HMD. With five gripmarks we demonstrate a recognition rate of 94.2%; we show that our grip detection benefits from the physical constraints of holding an object. We explore two categories of input objects, 1) tangible surfaces and 2) tangible tools, and present two representative applications. We discuss the design and technical challenges of expanding the concept.
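The abstract outlines a pipeline of grip recognition, a grip-to-primitive association, and surface inference. Below is a minimal sketch of that pipeline, assuming a nearest-template grip classifier over joint-angle features; all names, feature vectors, and thresholds are hypothetical and not the authors' implementation.

```python
# Minimal sketch of the Gripmarks idea (not the authors' implementation):
# a recognized hand grip is mapped to a shape primitive, and the primitive
# plus the grip pose yields an input surface for the HMD.
import math

# Hypothetical templates: one averaged finger-joint-angle vector per gripmark.
GRIP_TEMPLATES = {
    "mug_grip":   [0.9, 0.8, 0.2, 0.1, 0.1],
    "phone_grip": [0.3, 0.3, 0.3, 0.3, 0.7],
    "pen_grip":   [0.7, 0.9, 0.9, 0.2, 0.2],
}

# Each gripmark is associated with a shape primitive ahead of time.
GRIP_TO_PRIMITIVE = {
    "mug_grip":   "cylinder",
    "phone_grip": "box",
    "pen_grip":   "stick",
}

def classify_grip(joint_angles, max_dist=0.5):
    """Nearest-template matching over joint angles; returns a gripmark or None."""
    best, best_d = None, float("inf")
    for name, template in GRIP_TEMPLATES.items():
        d = math.dist(joint_angles, template)
        if d < best_d:
            best, best_d = name, d
    return best if best_d <= max_dist else None

def input_surface(gripmark, palm_pose):
    """Infer the usable input surface from the primitive and the grip pose."""
    primitive = GRIP_TO_PRIMITIVE[gripmark]
    # e.g. a box primitive exposes its largest face, offset from the palm;
    # the offset below is a placeholder, not a calibrated value.
    return {"primitive": primitive, "origin": palm_pose, "offset_cm": 2.0}

# Example: a tracked hand pose is classified and, after an activation
# gesture (not shown), the object's surface becomes an HMD input surface.
grip = classify_grip([0.85, 0.8, 0.25, 0.1, 0.15])
if grip is not None:
    print(input_surface(grip, palm_pose=(0.0, 1.2, 0.4)))
```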

Embodied Axes: Tangible, Actuated Interaction for 3D Augmented Reality Data Spaces
Description

We present Embodied Axes, a controller that supports selection operations for 3D imagery and data visualisations in Augmented Reality. The device is an embodied representation of a 3D data space: each of its three orthogonal arms corresponds to a data axis or a domain-specific frame of reference. Each axis is composed of a pair of tangible, actuated range sliders for precise data selection, and rotary encoder knobs for additional parameter tuning or menu navigation. The motor-actuated sliders support alignment to positions of significant values within the data, or coordination with other input, e.g., mid-air gestures in the data space, touch gestures on the surface below the data, or another Embodied Axes device supporting multi-user scenarios. We conducted expert enquiries in medical imaging, which provided formative feedback on domain tasks and refinements to the design. Additionally, a controlled user study found that Embodied Axes was overall more accurate than conventional tracked controllers for selection tasks.
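As a rough illustration of the selection mapping described above, the sketch below converts the three pairs of range-slider positions into an axis-aligned selection box and filters data points against it; the normalisation scheme and all names are assumptions, not the device's actual implementation.

```python
# Minimal sketch: each arm's pair of range sliders defines one interval,
# and the three intervals together define an axis-aligned selection box.

def sliders_to_box(slider_pairs, axis_ranges):
    """Convert normalised slider positions (0..1 per pair) into data-space bounds."""
    box = []
    for (lo_t, hi_t), (dmin, dmax) in zip(slider_pairs, axis_ranges):
        lo = dmin + lo_t * (dmax - dmin)
        hi = dmin + hi_t * (dmax - dmin)
        box.append((min(lo, hi), max(lo, hi)))
    return box  # [(xmin, xmax), (ymin, ymax), (zmin, zmax)]

def select_points(points, box):
    """Return the data points that fall inside the selection box."""
    return [p for p in points
            if all(lo <= v <= hi for v, (lo, hi) in zip(p, box))]

# Example: x/y/z slider pairs over hypothetical data ranges.
box = sliders_to_box(
    slider_pairs=[(0.2, 0.6), (0.0, 0.5), (0.4, 0.9)],
    axis_ranges=[(0, 100), (-50, 50), (0, 10)],
)
print(select_points([(30, -10, 5), (90, 0, 1)], box))
```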

Body Follows Eye: Unobtrusive Posture Manipulation Through a Dynamic Content Position in Virtual Reality
Description

While virtual objects are likely to be a part of future interfaces, we lack knowledge of how the dynamic position of virtual objects influences users' posture. In this study, we investigated how users' posture changes when a content window in virtual reality (VR) is moved unobtrusively or swiftly. In two perception studies, we estimated the perception thresholds for undetectably slow motion and for displacement during an eye blink. In a formative study, we compared users' performance, posture change, and subjective responses under unobtrusive, swift, and no-motion conditions. Based on the results, we designed concept applications and explored the potential design space of moving virtual content for unobtrusive posture change. Drawing on our findings, we discuss interfaces that control users and propose initial design guidelines for unobtrusive posture manipulation.
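The two manipulation strategies whose thresholds the perception studies estimate can be sketched as a simple per-frame update rule: drift the content window slower than an assumed detection speed, or displace it while the eyes are closed. The threshold values below are placeholders, not the thresholds measured in the study.

```python
# Minimal sketch of unobtrusive window repositioning, assuming placeholder
# thresholds for sub-detection drift speed and blink-masked displacement.

SLOW_DRIFT_DEG_PER_S = 0.5   # placeholder sub-threshold angular speed
BLINK_JUMP_DEG = 2.0         # placeholder displacement hidden by a blink

def update_window(window_angle, target_angle, dt, eyes_closed):
    """Step the content window toward the target without the user noticing."""
    error = target_angle - window_angle
    if eyes_closed:
        # A larger displacement is tolerated while vision is suppressed by the blink.
        step = max(-BLINK_JUMP_DEG, min(BLINK_JUMP_DEG, error))
    else:
        # Otherwise drift below the assumed perception threshold.
        max_step = SLOW_DRIFT_DEG_PER_S * dt
        step = max(-max_step, min(max_step, error))
    return window_angle + step

# Example: nudging a window from 0 deg toward 10 deg over a few frames.
angle = 0.0
for frame, blink in enumerate([False, False, True, False]):
    angle = update_window(angle, target_angle=10.0, dt=1 / 90, eyes_closed=blink)
    print(f"frame {frame}: {angle:.3f} deg")
```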

HiveFive: Immersion Preserving Attention Guidance in Virtual Reality
Description

Recent advances in Virtual Reality (VR) technology, such as larger fields of view, have made VR increasingly immersive. However, a larger field of view often results in a user focusing on certain directions and missing relevant content presented elsewhere on the screen. With HiveFive, we propose a technique that uses swarm motion to guide user attention in VR. The goal is to seamlessly integrate directional cues into the scene without losing immersiveness. We evaluate HiveFive in two studies. First, we compare biological motion (from a prerecorded swarm) with non-biological motion (from an algorithm), finding further evidence that humans can distinguish between these motion types and that, contrary to our hypothesis, non-biological swarm motion results in significantly faster response times. Second, we compare HiveFive to four other techniques and show that it not only results in fast response times but also has the smallest negative effect on immersion.
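To make the idea of algorithmically generated (non-biological) swarm motion concrete, the sketch below animates a ring of particles around a guide point that travels from the current view direction toward the target object, providing a directional cue inside the scene. The particle count, radius, and interpolation are illustrative assumptions, not HiveFive's actual motion model.

```python
# Minimal sketch of an algorithmic swarm cue: particles jitter around a
# guide point that moves from the view direction toward the target.
import math, random

def guide_point(view_dir, target_dir, t):
    """Linearly interpolate the swarm centre from the view toward the target."""
    return tuple(v + t * (g - v) for v, g in zip(view_dir, target_dir))

def swarm_positions(centre, n=20, radius=0.1, phase=0.0):
    """Place n particles on a jittered ring around the centre (one animation frame)."""
    pts = []
    for i in range(n):
        a = phase + 2 * math.pi * i / n
        jitter = random.uniform(-0.02, 0.02)
        pts.append((centre[0] + (radius + jitter) * math.cos(a),
                    centre[1] + (radius + jitter) * math.sin(a),
                    centre[2]))
    return pts

# Example: one frame of the cue halfway between the view and the target.
c = guide_point(view_dir=(0.0, 0.0, 1.0), target_dir=(0.5, 0.2, 0.8), t=0.5)
print(swarm_positions(c)[:3])
```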

Heatmaps, Shadows, Bubbles, Rays: Comparing Mid-Air Pen Position Visualizations in Handheld AR
Description

In Handheld Augmented Reality, users look at AR scenes through a smartphone held in one hand. In this setting, having a mid-air pointing device such as a pen in the other hand greatly expands the interaction possibilities; for example, it lets users create 3D sketches and models while on the go. However, perceptual issues in Handheld AR make it difficult to judge the distance of a virtual object and thus to align the pen with it. To address this, we designed and compared different visualizations of the pen's position in its virtual environment, measuring pointing precision, task time, activation patterns, and subjective ratings of helpfulness, confidence, and comprehensibility for each visualization. While all visualizations resulted in only minor differences in precision and task time, subjective ratings of perceived helpfulness and confidence favor a 'heatmap' technique that colors the objects in the scene based on their distance to the pen.
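A minimal sketch of the preferred 'heatmap' idea is given below, assuming a simple linear red-to-blue ramp over object-to-pen distance; the colour ramp and cut-off distance are placeholders, not the values used in the paper.

```python
# Minimal sketch: tint every scene object by its distance to the pen tip,
# so objects near the pen read as "hot". Ramp and cut-off are assumptions.
import math

def heat_colour(obj_pos, pen_tip, max_dist=0.3):
    """Map object-to-pen distance onto a red (near) to blue (far) RGB tint."""
    d = math.dist(obj_pos, pen_tip)
    t = min(d / max_dist, 1.0)   # 0 = at the pen, 1 = at/beyond the cut-off
    return (1.0 - t, 0.0, t)     # simple linear red-to-blue ramp

# Example: colour a few scene objects relative to a tracked pen tip position.
pen_tip = (0.10, 0.05, 0.40)
for name, pos in {"cube": (0.12, 0.05, 0.42), "sphere": (0.40, 0.30, 0.80)}.items():
    print(name, heat_colour(pos, pen_tip))
```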
