Visual perspective is a crucial design factor in Virtual Reality (VR). Especially when complex motor tasks are involved, it can affect both objective performance and subjective experience. We compared four visual perspectives (First-Person view, translucent Ghost view, Third-Person view, and Hybrid view) in a user study (N=20) involving a balancing game at varying difficulty levels. Our findings reveal complex tradeoffs between the sense of embodiment, performance, and preference: The preferred Hybrid perspective offered a significant stability advantage at low task difficulty. However, this benefit vanished with increasing physical demand, revealing a speed-accuracy trade-off in which external views required longer completion times. Ego-centric perspectives (First-Person and Ghost) induced a stronger sense of embodiment and presence, but were less preferred. Participants' choice was determined not by representational fidelity but by pragmatic considerations of perceived utility. As perceived effectiveness can overrule objective performance and subjective experience, the choice of perspective is an important factor for future training and rehabilitation applications in VR.
Imagine placing your smartphone on a table in a noisy restaurant and clearly capturing the voices of friends seated around you, or recording a lecturer’s voice with clarity in a reverberant auditorium. We introduce SonicSieve, the first intelligent directional speech extraction system for smartphones using a bio-inspired acoustic microstructure. Our passive design embeds directional cues onto incoming speech without any additional electronics. It attaches to the in-line microphone of low-cost wired earphones that plug into smartphones. We present an end-to-end neural network that processes raw audio mixtures in real time on mobile devices. Our results show that SonicSieve achieves a signal quality improvement of 5.0 dB when focusing on a 30° angular region. Additionally, the performance of our system based on only two microphones exceeds that of conventional 5-microphone arrays.
Facial interaction provides a safe, hands-free input method for cyclists. However, existing wearable facial gesture recognition suffers from severe interference in real-world conditions such as lighting, vibration, sweat, noise, and temperature changes. We present MagFace, an interference-resistant recognition system for cycling glasses using energy-efficient magnetic sensing. MagFace employs four pairs of magnetic silicone and magnetometers on the frame to capture subtle facial skin movements, operating at 30 Hz with a peak power of 150 mW. A tailored deep learning pipeline effectively learns from the magnetic signals for gesture classification. An evaluation (N=15) shows that MagFace required only one minute of training data to recognize six gestures across different cycling scenarios with high accuracy. A controlled-conditions evaluation (N=8) shows MagFace's robustness against strong lighting, wind, bumpy roads, and uphill riding. Finally, an in-the-wild evaluation (N=14) shows the stable performance of MagFace's real-time system and demonstrates its promising usability.
We present Xspine, a design and fabrication method for creating motion-capable, self-sensing structures using multi-material FDM 3D printing with conductive filaments. Our method embeds compliant mechanisms and circuits directly into geometries, enabling the detection of large deformations in a single, assembly-free print. Specifically, we design printable components and circuit layouts aligned with the layer-by-layer nature of FDM 3D printing. Furthermore, we explore physical and digital augmentation strategies to enhance the interactive potential of the structures. To simplify the workflow, we develop an interactive design tool that allows users to configure motion behaviors, preview structural responses, and generate printable circuits. Finally, we demonstrate several application examples that highlight the potential of Xspine for customizable and interactive 3D-printed devices.
We propose a method to fabricate objects composed of 3D-printed flattened pieces with integrated zipper-like structures. The object is manually assembled into a 3D shape by connecting the zipper components. By employing a zipper design that allows for angle-independent connections between patches, our method enables both the surface and zipper components to be printed in the same orientation, resulting in high-quality reconstruction of the input model with a faster 3D printing process that wastes less material. We implement a fully automated pipeline that takes a 3D model as input, converts it into developable patches, generates the zipper structures, and flattens them for subsequent 3D printing. We demonstrate that our approach significantly reduces fabrication time and support material consumption. We also present application examples that highlight the versatility of our method.
Researchers can build craft-aligned digital fabrication technologies by designing interfaces inspired by craft tools. This process often demands real-time physical interactions not supported by today’s automation-focused CNC control systems. We theorize that we can lower the engineering challenges of craft-aligned CNC prototyping by allowing designers to modify existing CNCs to support both automated and real-time control. We contribute a new creative motion control system, Stepdance, which consists of two elements: 1) modular controllers that replace the G-code controller of a CNC and can be chained together to develop new interfaces, and 2) a modular programming library that supports declarative mappings between live user input, pre-programmed operations, and machine motion. We developed Stepdance with practitioners at the Haystack Mountain School of Crafts, where we used the system to modify commercial plotters and 3D printers. We analyze the resulting artifacts, interactions, and ideas to discuss how Stepdance can broaden the practice of CNC design via physical metaphor.
We introduce a knitted inductive flex sensor that seamlessly integrates a coil and a capacitor into a soft, flexible tubular knit. By knitting enameled copper wires, we form a self-supporting coil whose inductance changes with stretching and bending. By knitting both a coil and a parallel-wire capacitor, we create a textile resonant LC circuit while preserving the softness, elasticity, and breathability of knitted textiles. In this paper, we present the fabrication process using an industrial knitting machine, evaluate sensor sensitivity and hysteresis over 100 bending cycles, and demonstrate the sensor's versatility across joints of different radii. Our results show that knitted inductive sensors combine the wearability of soft textiles with the stability of inductive sensing, opening new sensing opportunities in healthcare, rehabilitation, and interactive electronic garments.