Embodied Interaction and Wearables

Conference
CHI 2026
Next Generation Wearable Haptics Should Balance Virtual & Real-world Fidelity
Abstract

Providing tactile feedback when users contact virtual interfaces has been a seminal advance. However, we posit these advances have been explored in isolation from considerations of users' physical interactions with surrounding objects. Most haptic interfaces were designed to optimize virtual interfaces, but rarely consider that users also need to feel physical interfaces (e.g., handling tools, donning/doffing headsets). We argue against this being the sole design objective driving haptic interfaces; instead, we propose also optimizing the fidelity of the real-world sensations that users feel while wearing a haptic device. We propose a framework that classifies haptic devices by measuring not only their ability to deliver virtual feedback but also how much they impair physical feedback. We argue this balancing act is an urgent mainstream need, given the success of Mixed Reality. Thus, to accelerate research in this area, we synthesize existing techniques into new conceptual categories: feel-through, on-demand, relocated, and remote actuators. Finally, we present their pros and cons and discuss a possible roadmap.

Authors
Shan-Yuan Teng
National Taiwan University, Taipei, Taiwan
Yudai Tanaka
University of Chicago, Chicago, Illinois, United States
Alex Mazursky
University of Chicago, Chicago, Illinois, United States
Pedro Lopes
University of Chicago, Chicago, Illinois, United States
Video
SensoryBlox: Plug-and-Feel Modular Multi-Sensory User Interface for Immersive Cardboard VR
Abstract

We present SensoryBlox, a modular, multi-sensory user interface designed for integration with cardboard-based virtual reality (VR) head-mounted displays (HMDs). SensoryBlox features interchangeable sensory modules—vibration, temperature, wind, and olfactory—that enable users to assemble customized multi-sensory configurations tailored to diverse VR contexts. The system includes in-VR interfaces for module scanning, spatial tracking, and real-time customization of feedback patterns. To inform SensoryBlox design, we conducted three user studies. The initial study explored application scenarios and associated sensory modalities to identify design requirements for a modular multi-sensory VR system. Based on these findings, we developed the hardware modules and in-VR software interfaces. In the second study, we evaluated the usability and interaction experience of SensoryBlox across all functionalities. Finally, a comparison study examined the impact of multi-sensory feedback on user experience. Our findings demonstrate the potential of a modular multi-sensory system in enriching immersion and engaging interactions within low-cost VR environments.

Authors
Hyunjae Gil
The University of Texas at Dallas, Richardson, Texas, United States
Abbas Khawaja
The University of Texas at Dallas, Richardson, Texas, United States
Ben Cressman
The University of Texas at Dallas, Richardson, Texas, United States
Andrew Gerungan
The University of Texas at Dallas, Richardson, Texas, United States
Jin Ryong Kim
The University of Texas at Dallas, Richardson, Texas, United States
Open, Accurate, and Calibration-Free Muscle-Computer Interfaces
Abstract

Recent advances in muscle-computer interfaces (MCIs) have brought us closer to wearable EMG devices capable of accurate gesture recognition without the longstanding requirement for user-specific calibration data. However, much of this progress has relied on closed datasets, proprietary resources, and custom hardware, limiting accessibility for the broader research community. We take a step toward democratizing universal MCIs by showing that calibration-free gesture recognition can be achieved with open-source code, publicly available datasets, and commodity hardware. Using a 612-participant Myo Armband dataset to train foundational models, we demonstrate accurate cross-user performance for two real-time interaction tasks (inspired by recent closed-source state-of-the-art results): (1) 1D cursor control (mean acquisition time: 1.1 s) and (2) five-class discrete gesture recognition (error rate: 2% and response time: 1.0 s). For the first time, we contribute openly available calibration-free models and code for creating highly accurate MCIs, establishing a new foundation for future replication and extension.

Authors
Ethan Eddy
University of New Brunswick, Fredericton, New Brunswick, Canada
Evan Campbell
University of New Brunswick, Fredericton, New Brunswick, Canada
Erik J. Scheme
University of New Brunswick, Fredericton, New Brunswick, Canada
Scott Bateman
University of New Brunswick, Fredericton, New Brunswick, Canada
WeldAR: Augmenting Live Hands-On Training with In-Situ Guidance for Novice Learners
Abstract

Extended Reality (XR) systems for physical skill training have largely emphasized simulation rather than real-time in-situ instruction. We present WeldAR, an Augmented Reality (AR) system with five learning modules that overlays real-time guidance during live welding using a headset integrated into a welding helmet and a torch attachment. We conducted an in-situ within-subjects study with 24 novices, comparing AR guidance to video instruction for live welding across practice and unassisted tests. AR improved performance in both assisted practice and unassisted tests, primarily driven by gains in travel speed and work angle. By offering real-time feedback on four performance measures, AR supported novices in carrying embodied knowledge into independent tasks. Our contributions include: (1) WeldAR for in-situ physical skill training; (2) empirical evidence that AR enhances composite welding performance and key physical skills; and (3) implications for the development of AR systems that support in-situ, embodied skill training in welding and related trades.

Award
Honorable Mention
Authors
Chuhan (Franklin) Xu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Lia Sparingga Purnamasari
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Zhenfang Chen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Daragh Byrne
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Dina EL-Zanfaly
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
BernO: A Breath-Driven Odor Display for Spatial Olfactory Interaction in VR
Abstract

We present a breath-driven odor display device that enables spatial odor perception in virtual reality, relying on users’ natural inhalation rather than pumps or fans. The device supports rapid concentration adjustment through two models: a continuous gradient (monotonic concentration change with position) and a plume (intermittent, fluctuating patterns resembling natural dispersal). To explore its potential, we conducted a proof-of-concept evaluation across four tasks: concentration discrimination, direction and distance localization, and integrated position searching. Results show that the device can dynamically modulate odor concentration for spatial olfactory perception. Our findings further reveal complementary strengths and limitations of the two models—gradients support stable, precise cues, whereas plumes better emulate natural variability. This work introduces a simple yet effective method for simulating spatial odor experiences in VR, offering a lightweight, energy-efficient pathway that expands the design space for olfactory interaction research in virtual environments.

Authors
Yu Zhang
Tsinghua University, Beijing, China
Chih-Hung Lee
Tsinghua University, Beijing, China
Jingtong Cai
The Future Laboratory, Tsinghua University, Beijing, China
Yaqing Hou
Tsinghua University, Beijing, China
Siqi Zheng
Tsinghua University, Beijing, China
Qianyao Xu
Tsinghua University, Beijing, China
Qi Lu
Tsinghua University, Beijing, China
Mixed Presence in Mixed Reality: Charting the Challenges and Opportunities
Abstract

This paper investigates the challenges of designing mixed-presence environments for Mixed Reality and suggests future research directions derived from an expert workshop. Developing mixed-presence systems is a complex undertaking that combines the intricacies of both co-located and distributed mixed-reality spaces. Current literature in this field describes various promising design and development approaches but lacks a systematic overview, resulting in fragmented solutions to recurring challenges. Therefore, we conducted a comprehensive review of mixed-presence and multi-user remote mixed-reality systems, categorizing the prevalent challenges faced during the development of such systems, as well as current trends, common use cases, study tasks, and methodologies. Supported by these results, we then conducted an expert ideation workshop to collect and structure promising future research directions. As a result, we provide a detailed resource to orient and prepare developers for probable challenges and to support researchers in making informed design decisions for future mixed-presence studies in Mixed Reality.

Authors
Katja Krug
TUD Dresden University of Technology, Dresden, Germany
Wolfgang Büschel
University of Stuttgart, Stuttgart, Germany
Marc Satkowski
Fraunhofer Institute for Process Engineering and Packaging IVV, Dresden, Germany
Stefan Gumhold
TUD Dresden University of Technology, Dresden, Germany
Raimund Dachselt
TUD Dresden University of Technology, Dresden, Germany
Video
Belt and Whistles: Adding Lower Body Collision Awareness for MR Experiences
Abstract

Users of Virtual Reality (VR) primarily sense their environment through audiovisual cues. The lack of haptic feedback on their body can make them unaware of virtual obstacles outside their field of view. This lack of sensing can cause users to unknowingly penetrate virtual objects, breaking the scene's plausibility and disrupting the experience of other users in the same virtual space. We propose a haptic belt that increases the user's scene awareness by rendering signals of collisions with and proximity to virtual objects around the user. In a user study, we show that the belt improves spatial awareness both in a fast, high-stress scenario where the user's attention is limited and during a relaxed experience where the belt is the only source of information. The belt enables users to move closer to obstacles while reducing unintended collisions.

Authors
Diar Karim
University of Birmingham, Birmingham, United Kingdom
Devika Mukherjee
University of Birmingham, Birmingham, United Kingdom
Daniele Giunchi
University of Birmingham, Birmingham, United Kingdom
Massimiliano Di Luca
University of Birmingham, Birmingham, United Kingdom
Eyal Ofek
University of Birmingham, Birmingham, United Kingdom