Let's get Physical

Conference name
CHI 2022
XR-OOM: MiXed Reality driving simulation with real cars for research and design
Abstract

High-fidelity driving simulators can act as testbeds for designing in-vehicle interfaces or validating the safety of novel driver assistance features. In this system paper, we develop and validate the safety of a mixed reality driving simulator system that enables us to superimpose virtual objects and events into the view of participants engaging in real-world driving in unmodified vehicles. To this end, we have validated the mixed reality system for basic driver cockpit and low-speed driving tasks, comparing the use of the system with non-headset and with the headset driving conditions, to ensure that participants behave and perform similarly using this system as they would otherwise. This paper outlines the operational procedures and protocols for using such systems for cockpit tasks (like using the parking brake, reading the instrument panel, and turn signaling) as well as basic low-speed driving exercises (such as steering around corners, weaving around obstacles, and stopping at a fixed line) in ways that are safe, effective, and lead to accurate, repeatable data collection about behavioral responses in real-world driving tasks.

Authors
David Goedicke
Cornell Tech, New York, New York, United States
Alexandra W.D. Bremers
Cornell Tech, New York, New York, United States
Sam Lee
Cornell Tech, New York, New York, United States
Fanjun Bu
Cornell Tech, Ithaca, New York, United States
Hiroshi Yasuda
Toyota Research Institute, Los Altos, California, United States
Wendy Ju
Cornell Tech, New York, New York, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517704

Video
ControllerPose: Inside-Out Body Capture with VR Controller Cameras
Abstract

We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. Our pipeline composites multiple camera views together, performs 3D body pose estimation, uses this data to control a rigged human model with inverse kinematics, and exposes the resulting user avatar to end user applications. We developed a series of demo applications illustrating the potential of our approach and more leg-centric interactions, such as balancing games and kicking soccer balls. We describe our proof-of-concept hardware and software, as well as results from our user study, which point to imminent feasibility.
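The pipeline described above composites camera views, estimates 3D body pose, and drives a rigged human model with inverse kinematics. As a hedged illustration of the IK step only, the sketch below solves the classic two-bone (shoulder–elbow–wrist) sub-problem analytically with the law of cosines; the function name and bone lengths are illustrative assumptions, not the paper's implementation.

```python
import math

def elbow_angle(shoulder_to_wrist: float, upper_arm: float, forearm: float) -> float:
    """Interior elbow angle (radians) that places the wrist at the given
    distance from the shoulder, via the law of cosines.
    pi = fully extended arm; smaller values = more flexion."""
    # Clamp the target distance to the arm's reachable range.
    d = max(min(shoulder_to_wrist, upper_arm + forearm),
            abs(upper_arm - forearm))
    cos_theta = (upper_arm**2 + forearm**2 - d**2) / (2 * upper_arm * forearm)
    return math.acos(max(-1.0, min(1.0, cos_theta)))
```

A full IK retarget would chain solutions like this per limb and blend with joint-limit constraints; this shows only the analytic core.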

Authors
Karan Ahuja
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Vivian Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Cathy Mengying Fang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Nathan Riopelle
Human-Computer Interaction Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Andy Kong
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Chris Harrison
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502105

Video
Enabling Tangible Interaction on Non-touch Displays with Optical Mouse Sensor and Visible Light Communication
Abstract

This paper presents Centaur, an input system that enables tangible interaction on displays, e.g., untouchable computer monitors. Centaur’s tangibles are built from low-cost optical mouse sensors, or can alternatively be emulated by commercial optical mice already available. They are trackable when put on the display, rendering a real-time and high-precision tangible interface. Even for ordinary personal computers, enabling Centaur requires no new hardware and installation burden. Centaur’s cost-effectiveness and wide availability open up new opportunities for tangible user interface (TUI) users and practitioners. Centaur’s key innovation lies in its tracking method. It embeds high-frequency light signals into different portions of the display content as location beacons. When the tangibles are put on the screen, they are able to sense the light signals with their optical mouse sensors, and thus determine the locations accordingly. We develop four applications to showcase the potential usage of Centaur.
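Centaur's localization idea, embedding high-frequency light signals into screen regions as location beacons that an optical mouse sensor can read, can be sketched as simple on-off keying over a grid of cells. The frame layout, bit width, and function names below are assumptions for illustration; the real system would also need synchronization, a preamble, and ambient-light rejection, which this sketch omits.

```python
def encode_cell(cell_id: int, n_bits: int = 8) -> list[int]:
    """Beacon frame for one grid cell: MSB-first bit pattern
    modulated onto the cell's brightness (1 = bright, 0 = dim)."""
    return [(cell_id >> i) & 1 for i in range(n_bits - 1, -1, -1)]

def decode_samples(samples: list[float], threshold: float = 0.5) -> int:
    """Recover a cell id from raw sensor brightness samples."""
    cell_id = 0
    for s in samples:
        cell_id = (cell_id << 1) | (1 if s > threshold else 0)
    return cell_id

def cell_to_xy(cell_id: int, grid_cols: int) -> tuple[int, int]:
    """Map a decoded cell id back to (col, row) on the display grid."""
    return (cell_id % grid_cols, cell_id // grid_cols)
```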

Authors
Yihui Yan
ShanghaiTech University, Shanghai, China
Zezhe Huang
ShanghaiTech University, Shanghai, China
Feiyang Xudu
ShanghaiTech University, Shanghai, China
Zhice Yang
ShanghaiTech University, Shanghai, China
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517666

Video
Embr: A Creative Framework for Hand Embroidered Liquid Crystal Textile Displays
Abstract

Conductive thread is a common material in e-textile toolkits that allows practitioners to create connections between electronic components sewn on fabric. When powered, conductive threads are used as resistive heaters to activate thermochromic dyes or pigments on textiles to create interactive, aesthetic, and ambient textile displays. In this work, we introduce Embr, a creative framework for supporting hand-embroidered liquid crystal textile displays (LCTDs). This framework includes a characterization of conductive embroidery stitches, an expanded repertoire of thermal formgiving techniques, and a thread modeling tool used to simulate mechanical, thermal, and electrical behaviors of LCTDs. Through exemplar artifacts, we annotate a morphological design space of LCTDs and discuss the tensions and opportunities of satisfying the wider range of electrical, craft, cultural, aesthetic, and functional concerns inherent to e-textile practices.
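The abstract notes that conductive threads act as resistive heaters to activate thermochromic liquid crystals. As a minimal back-of-the-envelope sketch of that electrical side (not Embr's thread modeling tool), the snippet below computes the Joule heating power of a stitch from assumed placeholder values for supply voltage and thread resistance per meter.

```python
def joule_power(volts: float, ohms_per_m: float, length_m: float) -> float:
    """Steady-state power dissipated by a thread stitch driven directly
    from a DC supply: P = V^2 / R, with R = ohms_per_m * length_m."""
    resistance = ohms_per_m * length_m
    return volts**2 / resistance

# Illustrative values: a 0.5 m stitch of ~10 ohm/m thread on a 5 V supply
# dissipates joule_power(5.0, 10.0, 0.5) == 5.0 watts.
```

The resulting surface temperature additionally depends on heat loss to the fabric and air, which Embr's simulation of mechanical, thermal, and electrical behavior would capture and this sketch does not.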

Authors
Shreyosi Endow
University of Texas at Arlington, Arlington, Texas, United States
Mohammad Abu Nasir Rakib
University of Texas at Arlington, Arlington, Texas, United States
Anvay Srivastava
University of Texas at Arlington, Plano, Texas, United States
Sara Rastegarpouyani
Savannah College of Art and Design, Savannah, Georgia, United States
Cesar Torres
UT Arlington, Arlington, Texas, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502117

Video
ASTEROIDS: Exploring Swarms of Mini-Telepresence Robots for Physical Skill Demonstration
Abstract

Online synchronous tutoring allows for immediate engagement between instructors and audiences over distance. However, tutoring physical skills remains challenging because current telepresence approaches may not allow for adequate spatial awareness, viewpoint control of the demonstration activities scattered across an entire work area, and the instructor's sufficient awareness of the audience. We present Asteroids, a novel approach for tangible robotic telepresence, to enable workbench-scale physical embodiments of remote people and tangible interactions by the instructor. With Asteroids, the audience can actively control a swarm of mini-telepresence robots, change camera positions, and switch to other robots' viewpoints. Demonstrators can perceive the audiences' physical presence while using tangible manipulations to control the audience's viewpoints and presentation flow. We conducted an exploratory evaluation for Asteroids with 12 remote participants in a model-making tutorial scenario with an architectural expert demonstrator. Results suggest our unique features benefitted participants' engagement, sense of presence, and understanding.

Authors
Jiannan Li
University of Toronto, Toronto, Ontario, Canada
Mauricio Sousa
University of Toronto, Toronto, Ontario, Canada
Chu Li
University of Toronto, Toronto, Ontario, Canada
Jessie Liu
University of Toronto, Toronto, Ontario, Canada
Yan Chen
University of Toronto, Toronto, Ontario, Canada
Ravin Balakrishnan
University of Toronto, Toronto, Ontario, Canada
Tovi Grossman
University of Toronto, Toronto, Ontario, Canada
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501927

Video