We present PneuModule, a tangible interface platform that enables users to reconfigure physical controls on pressure-sensitive touch surfaces using pneumatically-actuated inflatable pin arrays. PneuModule consists of a main module and extension modules. The main module is tracked on the touch surface and forwards continuous inputs from multiple attached extension modules to the touch surface. Extension modules have distinct mechanisms for user input, which pneumatically actuate the inflatable pins at the bottom of the main module through internal air pipes. The main module accepts multi-dimensional input because each pin is individually inflated by a corresponding air chamber. Moreover, since the extension modules are swappable and identifiable through their marker design, users can quickly customize the interface layout. We contribute design details for the inflatable pins and a diverse set of pneumatic input-control design examples for PneuModule. We also demonstrate the feasibility of PneuModule through a series of evaluations and interactive prototypes.
https://doi.org/10.1145/3313831.3376838
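The multi-dimensional input idea above, where each inflatable pin carries one continuous pressure channel and the attached extension module is identified via a marker, can be illustrated with a small decoding sketch. Everything in it (the pin layout, the use of three marker pins, the 0.5 threshold, and the module names) is a hypothetical assumption made for illustration, not the published PneuModule design.

    # Hypothetical decoding of PneuModule-style input read from a
    # pressure-sensitive touch surface. Pin layout, marker scheme, module
    # names and thresholds are illustrative assumptions only.
    MODULE_IDS = {
        (1, 0, 1): "dial",     # assumed marker pattern for a dial module
        (1, 1, 0): "slider",   # assumed marker pattern for a slider module
    }

    def decode(pin_pressures):
        """pin_pressures: normalized pressure (0..1) per inflatable pin under
        the main module. Here the first three pins are assumed to encode the
        module marker; the remaining pins carry continuous input channels."""
        marker = tuple(int(p > 0.5) for p in pin_pressures[:3])
        module = MODULE_IDS.get(marker, "unknown")
        channels = pin_pressures[3:]   # one continuous value per input dimension
        return module, channels

    # e.g. a dial-style module turned to ~60% of its range:
    print(decode([0.9, 0.1, 0.8, 0.6]))   # -> ('dial', [0.6])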
Venous Materials is a novel concept and approach for interactive materials that utilize fluidic channels. We present a design method for fluidic mechanisms that respond to deformation caused by mechanical input from the user, such as pressure and bending. We designed a set of primitive venous structures that act as embedded analog fluidic sensors, displaying flow and color change. In this paper, we treat the fluid as the medium that carries tangible information triggered by deformation and, at the same time, functions as a responsive display of that information. To give users a simple way to create and validate designs of fluidic structures, we built a software platform and design-tool UI. This design tool allows users to quickly design the channel geometry and dynamically simulate the flow under an intended mechanical force. We present a range of applications that demonstrate how Venous Materials can be used to augment the interactivity of everyday physical objects.
https://doi.org/10.1145/3313831.3376129
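As a rough illustration of the kind of flow simulation such a design tool might perform, the sketch below treats a channel segment as a Hagen-Poiseuille resistor, so that pressure applied by the user drives a flow rate that could be rendered as a moving color front. This is a generic textbook model chosen purely for illustration; the paper's actual simulation method, channel geometry, and dimensions are not specified here, and the numbers are made up.

    import math

    def poiseuille_flow(delta_p, radius, length, viscosity=1.0e-3):
        """Volumetric flow rate (m^3/s) through a round channel segment under a
        pressure difference delta_p (Pa), using the Hagen-Poiseuille relation
        Q = delta_p / R with R = 8*mu*L / (pi*r^4). Viscosity defaults to water."""
        resistance = 8 * viscosity * length / (math.pi * radius ** 4)
        return delta_p / resistance

    # A user pressing on the inlet of a 5 cm channel with 0.3 mm radius:
    q = poiseuille_flow(delta_p=500.0, radius=0.3e-3, length=0.05)
    print(f"flow rate: {q:.2e} m^3/s")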
The use of Virtual Reality (VR) in applications such as data analysis, artistic creation, and clinical settings requires high-precision input. However, the current design of handheld controllers, where wrist rotation is the primary input approach, does not exploit the fingers' capability for dexterous, high-precision pointing and selection. To address this issue, we investigated the characteristics and potential of using a pen as a VR input device. We conducted two studies. The first examined which pen grip allowed the largest range of motion---we found that a tripod grip at the rear end of the shaft met this criterion. The second study investigated target selection via 'poking' and ray-casting, where we found that pen input outperformed traditional wrist-based input in both cases. Finally, we demonstrate potential applications enabled by VR pen input and grip postures.
This paper presents a new technique to predict the ray pointer landing position for selection movements in virtual reality (VR) environments. The technique adapts and extends a prior 2D kinematic template matching method to VR environments where ray pointers are used for selection. It builds on the insight that the kinematics of a controller and Head-Mounted Display (HMD) can be used to predict the ray's final landing position and angle. An initial study provides evidence that the motion of the head is a key input channel for improving prediction models. A second study validates this technique across a continuous range of distances, angles, and target sizes. On average, the technique's predictions were within 7.3° of the true landing position when 50% of the way through the movement, and within 3.4° when 90% of the way through. Furthermore, compared to a direct extension of Kinematic Template Matching, which only uses controller movement, this head-coupled approach increases prediction accuracy by a factor of 1.8 when 40% of the way through the movement.
https://doi.org/10.1145/3313831.3376489
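The core idea, matching the partial velocity profile of an in-progress selection movement against templates of completed movements to extrapolate the final landing angle, can be sketched as follows. This is a much-simplified, hypothetical rendering of kinematic template matching rather than the authors' model; in particular, the way controller and HMD motion are combined into one angular-speed signal, the resampling scheme, and the progress-fraction search are illustrative assumptions.

    import numpy as np

    def predict_landing_angle(partial_speed, angle_so_far, templates):
        """Simplified kinematic-template-matching sketch (hypothetical, not the
        paper's exact formulation). partial_speed: angular-speed samples of the
        ray so far (e.g. controller and HMD rotation combined); angle_so_far:
        angular distance travelled so far in degrees; templates: angular-speed
        profiles recorded from previously completed selection movements."""
        n = len(partial_speed)
        partial = np.asarray(partial_speed, dtype=float)
        partial /= max(partial.max(), 1e-9)              # compare shapes, not magnitudes
        best_err, best_fraction = np.inf, 1.0
        for profile in templates:
            profile = np.asarray(profile, dtype=float)
            for frac in np.linspace(0.3, 0.95, 14):      # how far through might we be?
                k = max(2, int(frac * len(profile)))
                prefix = np.interp(np.linspace(0.0, 1.0, n),
                                   np.linspace(0.0, 1.0, k), profile[:k])
                prefix /= max(prefix.max(), 1e-9)
                err = np.mean((prefix - partial) ** 2)
                if err < best_err:
                    # Fraction of the template's total travel covered by this prefix
                    # (speed samples summed as a proxy for angular distance).
                    travelled = profile[:k].sum() / max(profile.sum(), 1e-9)
                    best_err, best_fraction = err, travelled
        return angle_so_far / max(best_fraction, 1e-9)   # extrapolated landing angle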
In 3D environments, objects can be difficult to select when they overlap, as overlap reduces the available target area and increases selection ambiguity. We introduce Outline Pursuits, which extends a primary pointing modality for gaze-assisted selection of occluded objects. Candidate targets within a pointing cone are presented with an outline that is traversed by a moving stimulus. This affords completion of the selection by gaze attention to the intended target's outline motion, detected by matching the user's smooth pursuit eye movements. We demonstrate two techniques based on this concept, one with a controller as the primary pointer, and one in which Outline Pursuits are combined with head pointing for hands-free selection. Compared with conventional raycasting, the techniques require less movement for selection, as users do not need to reposition themselves for a better line of sight, and selection time and accuracy are less affected when targets become highly occluded.
https://doi.org/10.1145/3313831.3376438
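Smooth-pursuit matching of the kind described above is commonly implemented by correlating recent gaze samples with each candidate stimulus trajectory over a sliding window. The sketch below shows that generic approach; the window contents, the 0.8 threshold, and the use of per-axis Pearson correlation are illustrative choices, not necessarily the detector used in the paper.

    import numpy as np

    def pursuit_match(gaze_xy, stimuli_xy, threshold=0.8):
        """gaze_xy: (N, 2) gaze positions over a short window; stimuli_xy: dict
        mapping a candidate target id -> (N, 2) positions of that target's
        moving outline stimulus over the same window. Returns the id of the
        best-correlated target, or None if nothing exceeds the threshold."""
        best_id, best_corr = None, threshold
        for target_id, stim in stimuli_xy.items():
            cx = np.corrcoef(gaze_xy[:, 0], stim[:, 0])[0, 1]
            cy = np.corrcoef(gaze_xy[:, 1], stim[:, 1])[0, 1]
            corr = min(cx, cy)          # gaze must follow the stimulus on both axes
            if corr > best_corr:
                best_id, best_corr = target_id, corr
        return best_id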