GestuProp: 3D Virtual Reality Prop Generation with Co-Speech Gestures

Abstract

Virtual Reality (VR) has been widely adopted in domains such as gaming, education, and healthcare, where 3D props play a central role in enabling immersive interaction. With the advancement of generative AI, 3D props can now be created rapidly; however, little research has explored how gestures and speech can be integrated to support prop generation. To address this gap, we introduce GestuProp, a VR prop generation system driven by co-speech gestures. Building on a formative study with 30 participants, we propose a gesture design space and develop GestuProp on top of it. We then conducted a user study with 14 participants, which showed that GestuProp offers good usability and a favorable user experience, while also revealing how object categories influence gesture use and interaction. These findings highlight the potential of gesture–speech synergy to advance prop generation in VR.

Authors
Zhihao Yao
Tsinghua University, Beijing, China
Xiwen Yao
Tsinghua University, Beijing, China
Haowei Xiong
Tsinghua University, Beijing, China
Yuan-Ling Feng
Tsinghua University, Haidian, Beijing, China
Qirui Sun
Tsinghua University, Beijing, China
Yijie Guo
Tsinghua University, Beijing, China
Haipeng Mi
Tsinghua University, Beijing, China

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Novel Approaches to VR & Games: Neurodiversity, Tangibles, Wellbeing and more!

P1 - Room 128
7 presentations
2026-04-16, 20:15–21:45