VR & physical real input

Paper session

Conference Name
CHI 2020
PneuModule: Using Inflatable Pin Arrays for Reconfigurable Physical Controls on Pressure-Sensitive Touch Surfaces
Abstract

We present PneuModule, a tangible interface platform that enables users to reconfigure physical controls on pressure-sensitive touch surfaces using pneumatically actuated inflatable pin arrays. PneuModule consists of a main module and extension modules. The main module is tracked on the touch surface and forwards continuous input from multiple attached extension modules to the surface. Each extension module has a distinct user-input mechanism that pneumatically actuates the inflatable pins at the bottom of the main module through internal air pipes. The main module accepts multi-dimensional input because each pin is individually inflated by a corresponding air chamber. Moreover, because the extension modules are swappable and identifiable through their marker design, users can quickly customize the interface layout. We contribute design details for the inflatable pins and a diverse set of pneumatic input-control design examples for PneuModule. We also demonstrate the feasibility of PneuModule through a series of evaluations and interactive prototypes.
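The sensing principle described above lends itself to a compact software sketch: the touch surface reports per-contact pressure, and each pin's pressure above its deflated baseline can be read back as one continuous input channel. The paper does not include code, so the following Python sketch is purely illustrative; the function and parameter names (decode_pin_channels, baseline, tolerance) are assumptions, not PneuModule's actual API, and identifying which extension module is attached would be handled separately via the marker design the abstract mentions.

```python
# Illustrative sketch (not the authors' code): decoding continuous input
# channels from inflatable pins on a pressure-sensitive touch surface.
# All names and thresholds here are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class Contact:
    x: float          # contact position reported by the touch surface
    y: float
    pressure: float   # normalized contact pressure in [0, 1]

def decode_pin_channels(contacts, pin_offsets, module_x, module_y,
                        baseline=0.2, tolerance=5.0):
    """Map the pressure at each expected pin position to a value in [0, 1].

    pin_offsets: expected (dx, dy) of each inflatable pin relative to the
    tracked main-module position. baseline is the resting (deflated)
    contact pressure; pressure above it encodes the extension module's
    continuous input.
    """
    channels = []
    for dx, dy in pin_offsets:
        ex, ey = module_x + dx, module_y + dy
        # Find the reported contact closest to where this pin should be.
        best = min(contacts, default=None,
                   key=lambda c: (c.x - ex) ** 2 + (c.y - ey) ** 2)
        if best is None or ((best.x - ex) ** 2 + (best.y - ey) ** 2) ** 0.5 > tolerance:
            channels.append(0.0)   # pin not detected near its expected spot
        else:
            channels.append(max(0.0, (best.pressure - baseline) / (1.0 - baseline)))
    return channels
```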

Award
Honorable Mention
Keywords
Tangible user interfaces
pressure-sensitive touch surfaces
pneumatic actuation
reconfigurable physical controls
Authors
Changyo Han
University of Tokyo, Tokyo, Japan
Ryo Takahashi
University of Tokyo, Tokyo, Japan
Yuchi Yahagi
University of Tokyo, Tokyo, Japan
Takeshi Naemura
University of Tokyo, Tokyo, Japan
DOI

10.1145/3313831.3376838

Paper URL

https://doi.org/10.1145/3313831.3376838

Video
Venous Materials: Towards Interactive Fluidic Mechanisms
Abstract

Venous Materials is a novel concept and approach for interactive materials that utilize fluidic channels. We present a design method for fluidic mechanisms that respond to deformation by mechanical input from the user, such as pressure and bending. We designed a set of primitive venous structures that act as embedded analog fluidic sensors, displaying flow and color change. In this paper, we treat the fluid both as the medium that carries tangible information triggered by deformation and, at the same time, as a responsive display of that information. To give users a simple way to create and validate designs of fluidic structures, we built a software platform with a design-tool UI. The tool allows users to quickly design channel geometry and dynamically simulate the flow under an intended mechanical force. We present a range of applications that demonstrate how Venous Materials can be used to augment the interactivity of everyday physical objects.

Keywords
Programmable Materials
Human Material Interactions
Microfluidics
Authors
Hila Mor
Massachusetts Institute of Technology, Cambridge, MA, USA
Tianyu Yu
Tsinghua University, Beijing, China
Ken Nakagaki
Massachusetts Institute of Technology, Cambridge, MA, USA
Benjamin Harvey Miller
Massachusetts Institute of Technology, Cambridge, MA, USA
Yichen Jia
Massachusetts Institute of Technology, Cambridge, MA, USA
Hiroshi Ishii
Massachusetts Institute of Technology, Cambridge, MA, USA
DOI

10.1145/3313831.3376129

Paper URL

https://doi.org/10.1145/3313831.3376129

Video
Get a Grip: Evaluating Grip Gestures for VR Input using a Lightweight Pen
Abstract

The use of Virtual Reality (VR) in applications such as data analysis, artistic creation, and clinical settings requires high-precision input. However, the current design of handheld controllers, where wrist rotation is the primary input approach, does not exploit the fingers' capacity for dexterous movement in high-precision pointing and selection. To address this issue, we investigated the characteristics and potential of a pen as a VR input device. We conducted two studies. The first examined which pen grip allowed the largest range of motion; we found that a tripod grip at the rear end of the shaft met this criterion. The second study investigated target selection via 'poking' and ray-casting, where we found that pen grips outperformed traditional wrist-based input in both cases. Finally, we demonstrate potential applications enabled by VR pen input and grip postures.

Award
Honorable Mention
Keywords
Virtual Reality
pen input
finger and wrist dexterity
grip postures
handheld controller
spatial target selection
Authors
Nianlong Li
Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China
Teng Han
Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China
Feng Tian
Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China
Jin Huang
Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Beijing, China
Minghui Sun
Jilin University, Changchun, China
Pourang Irani
University of Manitoba, Winnipeg, Canada
Jason Alexander
Lancaster University, Lancaster, Lancashire, United Kingdom
DOI

10.1145/3313831.3376698

Paper URL

https://doi.org/10.1145/3313831.3376698

Head-Coupled Kinematic Template Matching: A Prediction Model for Ray Pointing in VR
Abstract

This paper presents a new technique for predicting the landing position of a ray pointer during selection movements in virtual reality (VR) environments. The technique adapts and extends a prior 2D kinematic template matching method to VR environments where ray pointers are used for selection. It builds on the insight that the kinematics of the controller and the Head-Mounted Display (HMD) can be used to predict the ray's final landing position and angle. An initial study provides evidence that head motion is a key input channel for improving prediction models. A second study validates the technique across a continuous range of distances, angles, and target sizes. On average, the technique's predictions were within 7.3° of the true landing position when 50% of the way through the movement, and within 3.4° when 90% of the way through. Furthermore, compared to a direct extension of kinematic template matching that uses only controller movement, the head-coupled approach increases prediction accuracy by a factor of 1.8 when 40% of the way through the movement.
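As a rough illustration of the underlying idea, the Python sketch below implements a simplified kinematic template matching loop: recorded controller and head angular-speed profiles serve as templates, a partially completed movement is compared against resampled template prefixes (since the completion fraction of the ongoing movement is unknown), and the best match's recorded landing angle becomes the prediction. This is not the paper's implementation; the function names, the head_weight parameter, and the candidate-fraction grid are all assumptions made for illustration.

```python
import numpy as np

def resample(profile, n):
    """Linearly resample a 1-D angular-speed profile to n samples."""
    profile = np.asarray(profile, dtype=float)
    idx = np.linspace(0, len(profile) - 1, n)
    return np.interp(idx, np.arange(len(profile)), profile)

def predict_landing(partial_ctrl, partial_head, templates, head_weight=1.0):
    """Predict the ray's landing angle from a partially completed movement.

    templates: list of (ctrl_speeds, head_speeds, landing_angle) tuples
    recorded from complete movements. Head speed is included because the
    paper reports head motion is a key channel for prediction.
    """
    n = len(partial_ctrl)
    pc, ph = resample(partial_ctrl, n), resample(partial_head, n)
    best_angle, best_dist = None, np.inf
    for ctrl, head, landing in templates:
        # The ongoing movement's completion fraction is unknown, so try
        # matching against several candidate prefixes of each template.
        for frac in np.linspace(0.3, 1.0, 8):
            k = max(2, int(frac * len(ctrl)))
            dist = (np.sum((pc - resample(ctrl[:k], n)) ** 2)
                    + head_weight * np.sum((ph - resample(head[:k], n)) ** 2))
            if dist < best_dist:
                best_dist, best_angle = dist, landing
    return best_angle
```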

Keywords
Endpoint Prediction
Target Prediction
Virtual Reality
VR
Kinematics
Ray Pointing
Template Matching
Authors
Rorik Henrikson
Chatham Labs, Toronto, ON, Canada
Tovi Grossman
University of Toronto, Toronto, ON, Canada
Sean Trowbridge
Facebook Reality Labs, Redmond, WA, USA
Daniel Wigdor
Chatham Labs & University of Toronto, Toronto, ON, Canada
Hrvoje Benko
Facebook Reality Labs, Redmond, WA, USA
DOI

10.1145/3313831.3376489

Paper URL

https://doi.org/10.1145/3313831.3376489

Video
Outline Pursuits: Gaze-assisted Selection of Occluded Objects in Virtual Reality
Abstract

In 3D environments, objects can be difficult to select when they overlap, as overlap reduces the available target area and increases selection ambiguity. We introduce Outline Pursuits, which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline traversed by a moving stimulus. The user completes the selection by attending to the intended target's outline motion, which is detected by matching the user's smooth-pursuit eye movements. We demonstrate two techniques based on this concept: one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional ray-casting, the techniques require less movement for selection, as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
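The smooth-pursuit matching the abstract refers to is commonly implemented by correlating gaze motion with each stimulus's motion over a sliding time window. The Python sketch below shows that standard scheme (per-axis Pearson correlation, minimum across axes, and a threshold to reject non-matches); the function names and the 0.8 threshold are illustrative assumptions, not the paper's exact matcher.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation of two 1-D signals; 0 if either is constant."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    if a.std() == 0 or b.std() == 0:
        return 0.0
    return float(np.corrcoef(a, b)[0, 1])

def match_pursuit(gaze_xy, stimuli_xy, threshold=0.8):
    """Select the candidate whose outline stimulus the gaze is following.

    gaze_xy: (n, 2) array of gaze samples over a sliding time window.
    stimuli_xy: dict mapping candidate id -> (n, 2) stimulus positions
    over the same window (each stimulus traverses its object's outline).
    Returns the best candidate id, or None if no correlation is high enough.
    """
    gaze = np.asarray(gaze_xy, dtype=float)
    best_id, best_score = None, threshold
    for cid, stim in stimuli_xy.items():
        stim = np.asarray(stim, dtype=float)
        # Correlate gaze and stimulus motion on each axis; take the minimum
        # so both horizontal and vertical motion must agree.
        score = min(pearson(gaze[:, 0], stim[:, 0]),
                    pearson(gaze[:, 1], stim[:, 1]))
        if score > best_score:
            best_id, best_score = cid, score
    return best_id
```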

Keywords
Virtual reality
Occlusion
Eye tracking
Smooth pursuits
Authors
Ludwig Sidenmark
Lancaster University, Lancaster, United Kingdom
Christopher Clarke
Lancaster University, Lancaster, United Kingdom
Xuesong Zhang
Katholieke Universiteit Leuven, Leuven, Belgium
Jenny Phu
Ludwig Maximilian University of Munich, Munich, Germany
Hans Gellersen
Aarhus University, Aarhus, Denmark
DOI

10.1145/3313831.3376438

Paper URL

https://doi.org/10.1145/3313831.3376438

Video