Scene2Hap: Generating Scene-Wide Haptics for VR from Scene Context with Multimodal LLMs

Abstract

Haptic feedback contributes to immersive virtual reality (VR) experiences. However, designing such feedback at scale for all objects within a VR scene remains time-consuming. We present Scene2Hap, an LLM-centered system that automatically designs object-level vibrotactile feedback for entire VR scenes based on the objects' semantic attributes and physical context. Scene2Hap employs a multimodal large language model to estimate each object's semantics and physical context, including its material properties and vibration behavior, from multimodal information in the VR scene. These estimated attributes are then used to generate or retrieve audio signals, subsequently converted into plausible vibrotactile signals. For more realistic spatial haptic rendering, Scene2Hap estimates vibration propagation and attenuation from vibration sources to neighboring objects, considering the estimated material properties and spatial relationships of virtual objects in the scene. Three user studies confirm that Scene2Hap successfully estimates the vibration-related semantics and physical context of VR scenes and produces realistic vibrotactile signals.
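The propagation-and-attenuation step described above can be illustrated with a minimal sketch. This is an assumption-laden toy model, not the paper's actual method: it uses a simple exponential distance-decay with a hypothetical per-material damping coefficient.

```python
import math

# Hypothetical per-material damping coefficients in 1/m (illustrative values,
# not from the paper).
MATERIAL_DAMPING = {"metal": 0.3, "wood": 0.8, "fabric": 2.0}

def attenuated_amplitude(source_amplitude: float, distance_m: float, material: str) -> float:
    """Toy exponential decay model: a(d) = a0 * exp(-k * d),
    where k depends on the material the vibration travels through."""
    k = MATERIAL_DAMPING[material]
    return source_amplitude * math.exp(-k * distance_m)

# A vibration source on a metal surface, felt 0.5 m away:
print(round(attenuated_amplitude(1.0, 0.5, "metal"), 3))
```

In this sketch, a neighboring object's vibration amplitude falls off with distance from the source, more steeply for highly damping materials such as fabric than for metal.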

Award
Best Paper
Authors
Arata Jingu
Saarland Informatics Campus, Saarbrücken, Germany
Easa AliAbbasi
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Sara Safaee
Saarland University and Max Planck Institute for Informatics, Saarbrücken, Saarland, Germany
Paul Strohmeier
Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany
Jürgen Steimle
Saarland University, Saarland Informatics Campus, Saarbrücken, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Haptic and Multisensory Feedback

P1 - Room 118
7 presentations
2026-04-13, 20:15 to 21:45