High-fidelity driving simulators can act as testbeds for designing in-vehicle interfaces or validating the safety of novel driver assistance features. In this system paper, we develop a mixed reality driving simulator system that superimposes virtual objects and events onto the view of participants engaged in real-world driving in unmodified vehicles, and we validate its safety. To this end, we have validated the mixed reality system for basic driver cockpit and low-speed driving tasks, comparing driving with and without the headset, to ensure that participants behave and perform similarly with the system as they would otherwise.
This paper outlines the operational procedures and protocols for using such systems for cockpit tasks (like using the parking brake, reading the instrument panel, and turn signaling) as well as basic low-speed driving exercises (such as steering around corners, weaving around obstacles, and stopping at a fixed line) in ways that are safe and effective and that lead to accurate, repeatable data collection about behavioral responses in real-world driving tasks.
We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation, and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. Our pipeline composites multiple camera views together, performs 3D body pose estimation, uses this data to control a rigged human model with inverse kinematics, and exposes the resulting user avatar to end-user applications. We developed a series of demo applications illustrating the potential of our approach, especially for more leg-centric interactions such as balancing games and kicking soccer balls. We describe our proof-of-concept hardware and software, as well as results from our user study, which point to imminent feasibility.
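To make the pipeline described above concrete, the following is a minimal Python sketch of its stages: compositing the controller-camera views, estimating 3D body pose, and driving a rigged model via inverse kinematics. All function names and the placeholder implementations are our own illustrative assumptions, not the authors' actual code or API.

```python
# Minimal sketch (not the authors' code) of the described pipeline:
# composite controller-camera views -> 3D pose estimation -> IK -> avatar.
import numpy as np

NUM_JOINTS = 17  # assumed COCO-style body keypoint count

def composite_views(frames: list[np.ndarray]) -> np.ndarray:
    """Stitch per-controller camera frames side by side (placeholder)."""
    return np.concatenate(frames, axis=1)

def estimate_pose_3d(image: np.ndarray) -> np.ndarray:
    """Stand-in for a learned 3D body pose estimator; returns (J, 3) keypoints."""
    return np.zeros((NUM_JOINTS, 3))

def solve_ik(keypoints_3d: np.ndarray) -> dict[str, float]:
    """Stand-in for inverse kinematics mapping keypoints to rig joint angles."""
    return {f"joint_{i}": 0.0 for i in range(NUM_JOINTS)}

def update_avatar(frames: list[np.ndarray]) -> dict[str, float]:
    """One frame of the pipeline: camera frames in, posed avatar out."""
    composite = composite_views(frames)
    keypoints = estimate_pose_3d(composite)
    return solve_ik(keypoints)

if __name__ == "__main__":
    left = np.zeros((480, 640, 3), dtype=np.uint8)   # left-controller camera
    right = np.zeros((480, 640, 3), dtype=np.uint8)  # right-controller camera
    pose = update_avatar([left, right])
    print(len(pose), "joint angles driving the rigged avatar")
```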
This paper presents Centaur, an input system that enables tangible interaction on displays, e.g., non-touch computer monitors. Centaur’s tangibles are built from low-cost optical mouse sensors, or can alternatively be emulated by commercial optical mice already available. They can be tracked when placed on the display, yielding a real-time, high-precision tangible interface. Even for ordinary personal computers, enabling Centaur requires no new hardware and imposes no installation burden. Centaur’s cost-effectiveness and wide availability open up new opportunities for tangible user interface (TUI) users and practitioners. Centaur’s key innovation lies in its tracking method. It embeds high-frequency light signals into different portions of the display content as location beacons. When the tangibles are put on the screen, they sense these light signals with their optical mouse sensors and thus determine their locations. We develop four applications to showcase the potential uses of Centaur.
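As an illustration of the beacon idea, the sketch below simulates one way a screen cell might blink a binary identifier as a high-frequency brightness pattern, and how an optical-mouse-style light sensor could decode it to recover the cell's location. The encoding, bit length, and threshold are assumptions for exposition, not Centaur's actual scheme.

```python
# Illustrative beacon encode/decode sketch (assumptions, not Centaur's protocol).

N_BITS = 8  # assumed beacon length: enough for 256 distinct screen cells

def encode_beacon(cell_id: int) -> list[int]:
    """Brightness sequence (0 = dim, 1 = bright) flashed by one screen cell."""
    return [(cell_id >> i) & 1 for i in range(N_BITS)]

def decode_beacon(samples: list[float], threshold: float = 0.5) -> int:
    """Recover the cell id from light-intensity samples taken by the sensor."""
    bits = [1 if s > threshold else 0 for s in samples]
    return sum(bit << i for i, bit in enumerate(bits))

if __name__ == "__main__":
    cell_id = 42                                   # cell under the tangible
    frames = encode_beacon(cell_id)                # shown at high refresh rate
    sensed = [0.9 if b else 0.1 for b in frames]   # simulated sensor readings
    assert decode_beacon(sensed) == cell_id
    print("tangible is over screen cell", decode_beacon(sensed))
```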
Conductive thread is a common material in e-textile toolkits that allows practitioners to create connections between electronic components sewn on fabric. When powered, conductive threads are used as resistive heaters to activate thermochromic dyes or pigments on textiles to create interactive, aesthetic, and ambient textile displays.
In this work, we introduce Embr, a creative framework for supporting hand-embroidered liquid crystal textile displays (LCTDs). This framework includes a characterization of conductive embroidery stitches, an expanded repertoire of thermal formgiving techniques, and a thread modeling tool used to simulate the mechanical, thermal, and electrical behaviors of LCTDs. Through exemplar artifacts, we annotate a morphological design space of LCTDs and discuss the tensions and opportunities of satisfying the wider range of electrical, craft, cultural, aesthetic, and functional concerns inherent to e-textile practices.
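For a sense of what such a thread model computes, the following back-of-the-envelope sketch estimates the Joule heating and steady-state temperature rise of a single conductive-thread trace from standard circuit relations. The resistance-per-meter, drive voltage, and lumped thermal-resistance values are assumed example numbers, not parameters from Embr.

```python
# Back-of-the-envelope electrical/thermal estimate for one stitched heating
# trace (illustrative assumptions only, not Embr's thread modeling tool).

def trace_resistance(length_m: float, ohms_per_m: float) -> float:
    """Electrical resistance of a conductive-thread trace."""
    return length_m * ohms_per_m

def joule_power(voltage_v: float, resistance_ohm: float) -> float:
    """Power dissipated when the trace is driven at a constant voltage."""
    return voltage_v ** 2 / resistance_ohm

def steady_state_temp_rise(power_w: float, thermal_resistance_k_per_w: float) -> float:
    """Lumped-model temperature rise above ambient at steady state."""
    return power_w * thermal_resistance_k_per_w

if __name__ == "__main__":
    R = trace_resistance(length_m=0.30, ohms_per_m=50.0)   # assumed thread spec
    P = joule_power(voltage_v=5.0, resistance_ohm=R)
    dT = steady_state_temp_rise(P, thermal_resistance_k_per_w=15.0)  # assumed
    print(f"R = {R:.1f} ohm, P = {P:.2f} W, ~{dT:.1f} K above ambient")
```

With these assumed numbers the trace settles roughly 25 K above ambient, the kind of operating point one might target to cross a low-temperature thermochromic transition without overheating the fabric.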
Online synchronous tutoring allows for immediate engagement between instructors and audiences over distance. However, tutoring physical skills remains challenging because current telepresence approaches may not provide adequate spatial awareness, viewpoint control over demonstration activities scattered across an entire work area, or sufficient instructor awareness of the audience. We present Asteroids, a novel approach for tangible robotic telepresence that enables workbench-scale physical embodiments of remote people and tangible interactions by the instructor. With Asteroids, the audience can actively control a swarm of mini-telepresence robots, change camera positions, and switch to other robots' viewpoints. Demonstrators can perceive the audience's physical presence while using tangible manipulations to control the audience's viewpoints and presentation flow. We conducted an exploratory evaluation of Asteroids with 12 remote participants in a model-making tutorial scenario with an architectural expert demonstrator. Results suggest that our unique features benefited participants' engagement, sense of presence, and understanding.