Roomify: Spatially-Grounded Style Transformation for Immersive Virtual Environments

Abstract

We present Roomify, a spatially-grounded transformation system that generates themed virtual environments anchored to users' physical rooms while maintaining spatial structure and functional semantics. Current VR approaches face a fundamental trade-off: full immersion sacrifices spatial awareness, while passthrough solutions break presence. Roomify addresses this through spatially-grounded transformation—treating physical spaces as "spatial containers" that preserve key functional and geometric properties of furniture while enabling radical stylistic changes. Our pipeline combines in-situ 3D scene understanding, AI-driven spatial reasoning, and style-aware generation to create personalized virtual environments grounded in physical reality. We introduce a cross-reality authoring tool enabling fine-grained user control through MR editing and VR preview workflows. Two user studies validate our approach: one with 18 VR users demonstrates a 63% improvement in presence over passthrough and 26% over fully virtual baselines while maintaining spatial awareness; another with 8 design professionals confirms the system's creative expressiveness (scene quality: 5.95/7; creativity support: 6.08/7) and professional workflow value across diverse environments.

Authors
Xueyang Wang
Tsinghua University, Beijing, China
Qinxuan Cen
Beijing University of Posts and Telecommunications, Beijing, China
Weitao Bi
Tsinghua University, Beijing, China
Yunxiang Ma
Tsinghua University, Beijing, China
Xin Yi
Tsinghua University, Beijing, China
Robert Xiao
University of British Columbia, Vancouver, British Columbia, Canada
Xinyi Fu
Tsinghua University, Beijing, China
Hewu Li
Tsinghua University, Beijing, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: XR and Environmental Adaptation/Integration

P1 - Room 118
7 presentations
2026-04-17, 20:15–21:45