Providing tactile feedback when users contact virtual interfaces has been a seminal advance. However, we posit that these advances have been explored in isolation from users' physical interactions with surrounding objects. Most touch interfaces were designed to optimize virtual feedback, but rarely consider that users also need to feel physical interfaces (e.g., tools, or when putting on and taking off headsets). We argue against this being the sole design objective driving haptic interfaces; instead, we propose also optimizing the fidelity of the real-world sensations that users feel while wearing a haptic device. We propose a framework to classify touch devices by measuring not only their ability to deliver virtual feedback but also how much they impair physical feedback; we argue this balancing act is an urgent mainstream need, given the success of Mixed Reality. Thus, to accelerate research in this area, we synthesize existing techniques into new conceptual categories: feel-through, on-demand, relocated, and remote actuators. Finally, we present their pros and cons and discuss a possible roadmap.
We present SensoryBlox, a modular, multi-sensory user interface designed for integration with cardboard-based virtual reality (VR) head-mounted displays (HMDs). SensoryBlox features interchangeable sensory modules (vibration, temperature, wind, and olfactory) that enable users to assemble customized multi-sensory configurations tailored to diverse VR contexts. The system includes in-VR interfaces for module scanning, spatial tracking, and real-time customization of feedback patterns. To inform the design of SensoryBlox, we conducted three user studies. The initial study explored application scenarios and associated sensory modalities to identify design requirements for a modular multi-sensory VR system. Based on these findings, we developed the hardware modules and in-VR software interfaces. In the second study, we evaluated the usability and interaction experience of SensoryBlox across all functionalities. Finally, a comparison study examined the impact of multi-sensory feedback on user experience. Our findings demonstrate the potential of a modular multi-sensory system to enrich immersion and engagement in low-cost VR environments.
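The abstract does not specify SensoryBlox's data model, so the following is a minimal sketch of how a scanned, user-tunable module configuration might be represented; all names, fields, and values below are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass

# Hypothetical module descriptor for illustration only.
@dataclass
class SensoryModule:
    kind: str          # "vibration" | "temperature" | "wind" | "olfactory"
    slot: int          # physical slot on the cardboard HMD (assumed)
    intensity: float   # 0.0-1.0, adjustable through the in-VR interface
    pattern: str       # e.g. "constant", "pulse", "ramp"

# One possible configuration assembled after in-VR module scanning,
# re-tunable at runtime via the customization interface described above.
beach_scene = [
    SensoryModule(kind="wind", slot=0, intensity=0.6, pattern="constant"),
    SensoryModule(kind="olfactory", slot=1, intensity=0.3, pattern="pulse"),
]
print(beach_scene)
```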
Recent advances in muscle-computer interfaces (MCIs) have brought us closer to wearable EMG devices capable of accurate gesture recognition without the longstanding requirement for user-specific calibration data. However, much of this progress has relied on closed datasets, proprietary resources, and custom hardware, limiting accessibility for the broader research community. We take a step toward democratizing universal MCIs by showing that calibration-free gesture recognition can be achieved with open-source code, publicly available datasets, and commodity hardware. Using a 612-participant Myo Armband dataset to train foundation models, we demonstrate accurate cross-user performance for two real-time interaction tasks (inspired by recent closed-source state-of-the-art results): (1) 1D cursor control (mean acquisition time: 1.1 s) and (2) five-class discrete gesture recognition (error rate: 2%; response time: 1.0 s). For the first time, we contribute openly available calibration-free models and code for creating highly accurate MCIs, establishing a new foundation for future replication and extension.
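Although the authors release their models and code, the sketch below is not that code; it is a rough, self-contained illustration of what calibration-free streaming recognition involves. The window length, stride, label set, and StubModel are placeholder assumptions.

```python
import numpy as np

WINDOW = 40       # 200 ms at the Myo Armband's 200 Hz EMG sampling rate
STRIDE = 10       # re-classify every 50 ms
GESTURES = ["rest", "fist", "open", "pinch", "point"]  # hypothetical label set

class StubModel:
    """Stand-in for a pretrained cross-user ("foundation") gesture model."""
    def predict(self, batch):
        # A real model would map (N, WINDOW, 8) EMG windows to class probabilities.
        return np.random.dirichlet(np.ones(len(GESTURES)), size=len(batch))

def classify_stream(emg, model):
    """Slide a fixed window over 8-channel EMG; debounce with a majority vote."""
    last, votes = None, []
    for start in range(0, len(emg) - WINDOW + 1, STRIDE):
        window = emg[start:start + WINDOW]                 # shape (WINDOW, 8)
        votes.append(int(np.argmax(model.predict(window[None])[0])))
        votes = votes[-3:]                                 # short history stabilizes output
        pred = max(set(votes), key=votes.count)
        if pred != last:                                   # emit only on state changes
            last = pred
            yield start / 200.0, GESTURES[pred]            # (time in s, gesture)

emg = np.random.randn(600, 8)                              # 3 s of fake 8-channel EMG
for t, g in classify_stream(emg, StubModel()):
    print(f"{t:.2f}s -> {g}")
```

No per-user calibration step appears anywhere in this loop; the cross-user model is applied directly to a new wearer's signal, which is the property the paper demonstrates.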
Extended Reality (XR) systems for physical skill training have largely emphasized simulation rather than real-time in-situ instruction. We present WeldAR, an Augmented Reality (AR) system with five learning modules that overlays real-time guidance during live welding using a headset integrated into a welding helmet and a torch attachment. We conducted an in-situ within-subjects study with 24 novices, comparing AR guidance to video instruction for live welding across practice and unassisted tests. AR improved performance in both assisted practice and unassisted tests, primarily driven by gains in travel speed and work angle. By offering real-time feedback on four performance measures, AR supported novices in carrying embodied knowledge into independent tasks. Our contributions include: (1) WeldAR for in-situ physical skill training; (2) empirical evidence that AR enhances composite welding performance and key physical skills; and (3) implications for the development of AR systems that support in-situ, embodied skill training in welding and related trades.
We present a breath-driven odor display device that enables spatial odor perception in virtual reality, relying on users’ natural inhalation rather than pumps or fans. The device supports rapid concentration adjustment through two models: a continuous gradient (monotonic concentration change with position) and a plume (intermittent, fluctuating patterns resembling natural dispersal). To explore its potential, we conducted a proof-of-concept evaluation across four tasks: concentration discrimination, direction and distance localization, and integrated position searching. Results show that the device can dynamically modulate odor concentration for spatial olfactory perception. Our findings further reveal complementary strengths and limitations of the two models—gradients support stable, precise cues, whereas plumes better emulate natural variability. This work introduces a simple yet effective method for simulating spatial odor experiences in VR, offering a lightweight, energy-efficient pathway that expands the design space for olfactory interaction research in virtual environments.
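The abstract does not give the underlying equations for the two concentration models, so here is a minimal sketch of one plausible parameterization; the linear falloff, burst probability, and amplitude range are assumptions for illustration only.

```python
import random

def gradient_concentration(distance, d_max=2.0, c_max=1.0):
    """Continuous-gradient model: concentration decreases monotonically
    with distance to the virtual source (linear falloff assumed here)."""
    return c_max * max(0.0, 1.0 - distance / d_max)

def plume_concentration(distance, burst_prob=0.4):
    """Plume model: the gradient value is gated by intermittent bursts of
    random amplitude, loosely mimicking turbulent odor filaments."""
    if random.random() < burst_prob:       # a filament passes the nose
        return gradient_concentration(distance) * random.uniform(0.5, 1.5)
    return 0.0                             # gap between filaments

# Because release is breath-driven, concentration would be sampled once per
# detected inhalation rather than continuously.
for d in (0.5, 1.0, 1.5):
    print(d, gradient_concentration(d), plume_concentration(d))
```

The contrast the authors report falls out of this structure: the gradient gives a stable, repeatable cue at each position, while the plume's stochastic gating trades precision for naturalistic variability.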
This paper investigates the challenges of designing mixed-presence environments for Mixed Reality and suggests future research directions derived from an expert workshop. Developing mixed-presence systems is a complex undertaking that combines the intricacies of both co-located and distributed mixed-reality spaces. Current literature in this field describes various promising design and development approaches but lacks a systematic overview, resulting in fragmented solutions to recurring challenges. We therefore conducted a comprehensive review of mixed-presence and multi-user remote mixed-reality systems, categorizing not only the prevalent challenges faced during the development of such systems but also current trends, common use cases, study tasks, and methodologies. Supported by these results, we then conducted an expert ideation workshop to collect and structure promising future research directions. As a result, we provide a detailed resource that orients and prepares developers for probable challenges and supports researchers in making informed design decisions for future mixed-presence studies in Mixed Reality.
Users of Virtual Reality (VR) primarily sense their environment through audiovisual cues. The lack of haptic feedback on their body can leave them unaware of virtual obstacles outside their field of view. As a result, users may unknowingly penetrate virtual objects, breaking the scene's plausibility and disrupting the experience of other users in the same virtual space. We propose a haptic belt that increases the user's scene awareness by rendering signals of collisions and proximity to virtual objects around them. In a user study, we show that the belt improves spatial awareness both in a fast, high-stress scenario where the user's attention is limited and during a relaxed experience where the belt is the only source of information. The belt enables users to move closer to obstacles while reducing unintended collisions.
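The abstract does not detail the belt's rendering algorithm; below is a minimal sketch of one plausible distance-and-bearing mapping, where the motor count, onset distance, and linear intensity ramp are our assumptions rather than the authors' design.

```python
import math

N_MOTORS = 8                      # motors spaced evenly around the waist (assumed)
NEAR, TOUCH = 1.5, 0.05           # proximity onset and collision distances in meters

def belt_intensities(user_xy, heading, obstacles):
    """Map nearby virtual obstacles to per-motor vibration intensities:
    bearing selects the motor, distance sets the strength."""
    levels = [0.0] * N_MOTORS
    for ox, oy in obstacles:
        dx, dy = ox - user_xy[0], oy - user_xy[1]
        dist = math.hypot(dx, dy)
        if dist > NEAR:
            continue                                    # too far to render
        bearing = (math.atan2(dy, dx) - heading) % (2 * math.pi)
        motor = round(bearing / (2 * math.pi) * N_MOTORS) % N_MOTORS
        # Full strength at contact, ramping down linearly to the onset distance.
        strength = 1.0 if dist <= TOUCH else 1.0 - (dist - TOUCH) / (NEAR - TOUCH)
        levels[motor] = max(levels[motor], strength)    # nearest obstacle wins
    return levels

# One obstacle ahead-right, one behind-left of a user facing +x at the origin.
print(belt_intensities((0.0, 0.0), 0.0, [(1.0, 0.2), (-0.3, -0.4)]))
```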