The advancement of Virtual Reality (VR) has expanded 2D user interfaces into 3D space. This shift has introduced richer interaction modalities but also new challenges, especially the lack of haptic feedback in mid-air interactions. Previous research has explored various methods of providing feedback for interface interactions, but most approaches require specialized haptic devices. We introduce haptic retargeting that lets users control multiple virtual screens in VR with a simple flat pad, which serves as a single physical proxy supporting seamless interaction across all of the screens. We conducted user studies to determine appropriate virtual screen sizes and positions under our retargeting method, then compared several drag-and-drop methods for cross-screen interaction. Finally, we compared our method with controller-based interaction in application scenarios.
https://dl.acm.org/doi/10.1145/3706598.3713629
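The abstract does not spell out the warping function, but haptic retargeting is commonly implemented as incremental body warping: as the real hand travels toward the physical proxy, the rendered hand is progressively offset toward the virtual target. Below is a minimal sketch under that assumption; the function and parameter names are hypothetical, not the paper's actual method:

```python
import numpy as np

def retarget_hand(real_hand, start_pos, real_pad, virtual_target):
    """Body-warping style haptic retargeting (illustrative): as the real
    hand moves from start_pos toward the physical pad, the rendered hand
    is offset so it arrives at the virtual screen instead.

    All positions are 3-vectors (numpy arrays) in the tracking frame.
    """
    total = np.linalg.norm(real_pad - start_pos)             # full reach distance
    travelled = np.linalg.norm(real_hand - start_pos)        # distance covered so far
    alpha = np.clip(travelled / max(total, 1e-6), 0.0, 1.0)  # warp progress in [0, 1]
    offset = virtual_target - real_pad                       # proxy-to-target mismatch
    return real_hand + alpha * offset                        # rendered hand position
```

Re-running the warp with a different `virtual_target` would let the same flat pad stand in for each virtual screen in turn, which is the essence of the one-proxy, many-screens idea.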
In contrast to design tools for generating graphics and audio from text prompts, haptic design tools lag behind due to the difficulty of constructing large-scale, high-quality datasets that pair vibrations with text descriptions. To address this gap, we propose ChatHAP, a conversational haptic system for designing vibrations. ChatHAP uses a large language model to integrate various haptic design approaches, including generating vibrations from signal parameters, navigating vibration libraries, and modifying existing vibrations. To further improve vibration navigation, we present an algorithm that adaptively learns user preferences over vibration features. A user study with novices (n=20) demonstrated that ChatHAP can serve as a practical design tool, and the proposed algorithm significantly reduced task completion time (by 38%), the number of prompts (by 25%), and prompt verbosity (by 36%). Participants found ChatHAP easy to use, and the study identified requirements for chat-based haptic design as well as features for further improvement. Finally, we present key findings with ChatHAP and discuss implications for future work.
https://dl.acm.org/doi/10.1145/3706598.3713441
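The abstract does not detail the preference-learning algorithm, so the following is only an illustrative sketch: a preference vector over vibration features is nudged toward liked examples and then used to rank a library. The class, the learning rule, and the feature set are all assumptions:

```python
import numpy as np

class PreferenceNavigator:
    """Toy preference learner over vibration feature vectors,
    e.g. [frequency, amplitude, roughness] (hypothetical features)."""

    def __init__(self, library, lr=0.3):
        self.library = library  # dict: vibration name -> feature vector
        self.pref = None        # running estimate of preferred features
        self.lr = lr            # learning rate for preference updates

    def feedback(self, liked_name):
        """Pull the preference estimate toward a vibration the user liked."""
        feat = np.asarray(self.library[liked_name], dtype=float)
        self.pref = feat if self.pref is None else self.pref + self.lr * (feat - self.pref)

    def suggest(self, k=3):
        """Return the k library vibrations closest to the current preference."""
        if self.pref is None:
            return list(self.library)[:k]  # no feedback yet: arbitrary order
        dist = lambda n: np.linalg.norm(np.asarray(self.library[n], dtype=float) - self.pref)
        return sorted(self.library, key=dist)[:k]
```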
Tendon vibration can create movement illusions: vibrating the biceps tendon induces an illusion of arm extension, while vibrating the triceps tendon induces an illusion of arm flexion. However, it is unclear how to create such illusions, demonstrated in neuroscience, and integrate them into interaction techniques in virtual reality (VR). We first design a motor setup for tendon vibration. Study 1 validates that the setup induces movement illusions, which on average create a 5.26 cm offset in active arm movements. Study 2 shows that tendon vibration raises the detection thresholds of the visual motion gains often used in VR interaction techniques by 0.22. A model developed in Study 2 predicts the effects of tendon vibration and is used in a biomechanical simulation to demonstrate the detection thresholds across typical reaching tasks in VR.
https://dl.acm.org/doi/10.1145/3706598.3714003
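As a worked example of the motion-gain result: a visual motion gain scales the virtual hand's displacement relative to the real displacement, and the gain goes unnoticed while it stays below the user's detection threshold, which tendon vibration shifts upward by 0.22 per Study 2. In the sketch below, only that 0.22 shift comes from the abstract; the base threshold of 1.25 is a placeholder:

```python
def virtual_hand_pos(origin, real_hand, gain):
    """Visual motion gain: virtual displacement = gain * real displacement."""
    return origin + gain * (real_hand - origin)

def gain_unnoticed(gain, base_threshold=1.25, vibration_shift=0.22):
    """base_threshold is a placeholder value; the 0.22 shift is the
    threshold improvement Study 2 attributes to tendon vibration."""
    return gain <= base_threshold + vibration_shift

# With these numbers, a gain of 1.4 would be noticed without vibration
# (1.4 > 1.25) but not with it (1.4 <= 1.25 + 0.22 = 1.47).
```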
Advances in large language models (LLMs) enable new interactive capabilities for wearable voice interfaces, yet traditional voice-and-audio I/O techniques limit users' ability to flexibly navigate information and manage timing in complex conversational tasks. We developed a suite of gesture and audio-haptic guidance techniques that let users control conversation flows and maintain awareness of possible future actions while simultaneously contributing and receiving conversation content through voice and audio. A 14-participant exploratory study compared our parallelized I/O techniques to a voice-only baseline. The results demonstrate the efficiency of gestures and haptics for information access while allowing system speech to be redirected and interrupted in a socially acceptable manner. The techniques also raised users' awareness of how to leverage the system's intelligent capabilities. Our findings inform design recommendations for facilitating role-based collaboration between multimodal I/O techniques and reducing users' perception of time pressure when interleaving interactions with system speech.
https://dl.acm.org/doi/10.1145/3706598.3714310
In Virtual Reality (VR), rendering realistic forces is crucial for immersion, but traditional vibrotactile feedback fails to convey force sensations effectively. Asymmetric vibrations that elicit pseudo forces show promise but are inherently tied to unwanted vibrations, reducing realism. Leveraging sensory attenuation, the reduced perceived intensity of self-generated sensations during movement, we present a novel algorithm that couples asymmetric vibrations with user motion so that they mimic self-generated sensations. Our psychophysics study with 12 participants shows that motion-coupled asymmetric vibration attenuates the experience of vibration (equivalent to a ~30% reduction in vibration amplitude) while preserving the experience of force, compared to continuous asymmetric vibrations (the state of the art). We demonstrate the effectiveness of our approach in VR through three scenarios: shooting arrows, lifting weights, and simulating haptic magnets. Participants preferred forces elicited by motion-coupled asymmetric vibration for tasks like shooting arrows and lifting weights. This research highlights the potential of motion-coupled asymmetric vibrations, offers new insights into sensory attenuation, and advances force rendering in VR.
https://dl.acm.org/doi/10.1145/3706598.3713358
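The abstract describes coupling asymmetric vibration to user motion but not the exact waveform or coupling law. A plausible minimal sketch gates the amplitude of a zero-mean asymmetric pulse train by hand speed, so the stimulus rises and falls with the user's own movement; the waveform shape and the `speed_ref` constant are assumptions:

```python
import numpy as np

def asymmetric_waveform(t, freq=40.0):
    """Toy zero-mean asymmetric pulse train: a strong brief pulse followed
    by a weak long tail, eliciting a net pseudo-force in one direction.
    (0.2 * 1.0 - 0.8 * 0.25 = 0, so there is no DC drift.)"""
    phase = (t * freq) % 1.0
    return np.where(phase < 0.2, 1.0, -0.25)

def motion_coupled_amplitude(hand_speed, base_amp=1.0, speed_ref=0.5):
    """Gate the vibration amplitude by hand speed (m/s) so the vibration
    reads as self-generated; speed_ref is an assumed scaling constant."""
    return base_amp * np.clip(hand_speed / speed_ref, 0.0, 1.0)

# Drive signal at time t for a hand moving at hand_speed:
#   motion_coupled_amplitude(hand_speed) * asymmetric_waveform(t)
```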
Spatialized vibrotactile feedback systems deliver tactile information by placing multiple vibrotactile actuators on the body. As more actuators are required to adequately convey information in complex applications, haptic designers struggle to build such systems with existing toolkits, whose scalability is limited. We propose VibraForge, an open-source vibrotactile toolkit that supports up to 128 vibrotactile actuators. Each actuator is encapsulated in a self-contained vibration unit driven by its own microcontroller. Using a chain-connection method, each unit receives independent vibration commands from a control unit, with fine-grained control over intensity and frequency. We also designed a GUI Editor to expedite the authoring of spatial vibrotactile patterns. A technical evaluation showed that the vibration units reliably reproduce audio waveforms with low-latency, high-bandwidth data communication. Case studies of a phonemic tactile display, virtual reality fitness training, and drone teleoperation demonstrate VibraForge's potential across domains. A usability study with non-expert users highlighted the toolkit's low technical barrier and customizability.
https://dl.acm.org/doi/10.1145/3706598.3714273
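The abstract specifies a chain connection carrying per-unit intensity and frequency commands but not the wire format, so the packet layout below is purely hypothetical. It sketches how a control unit might address one daisy-chained vibration unit over a serial link (using pyserial):

```python
import serial  # pyserial

def send_vibration_command(port, unit_id, intensity, freq_hz):
    """Send one command down the chain: [start byte, unit address,
    intensity 0-255, frequency / 10]. This 4-byte layout is invented
    for illustration; it is not VibraForge's actual protocol."""
    packet = bytes([0xAA, unit_id & 0x7F, int(intensity) & 0xFF, int(freq_hz // 10) & 0xFF])
    port.write(packet)

# Usage sketch: ~170 Hz at 60% intensity on unit 17.
# with serial.Serial("/dev/ttyUSB0", 115200) as port:
#     send_vibration_command(port, 17, 153, 170)
```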
Designing vibrotactile experiences collaboratively requires communicating through multiple senses. This is challenging in remote scenarios, as designers need to express and communicate their intentions effectively while iteratively building and refining experiences, ideally in real time. We formulate design considerations for collaborative haptic design tools and propose CollabJam, a collaborative prototyping suite enabling remote synchronous design of vibrotactile experiences for on-body applications. We first outline CollabJam’s features and present a technical evaluation. We then use CollabJam to study the communication and design patterns that emerge during haptic experience design, through an in-depth design evaluation spanning four sessions in which four pairs of participants designed and reviewed vibrotactile experiences remotely. A qualitative content analysis revealed how multi-sensory communication is essential to conveying ideas, how stimulating the tactile sense can interfere with personal boundaries, and how freely placing actuators on the skin offers both benefits and challenges.
https://dl.acm.org/doi/10.1145/3706598.3713469