Mid-Air Gestures for Proactive Olfactory Interactions in Virtual Reality

Abstract

Olfactory experiences are increasingly in demand due to their immersive benefits. However, most interaction implementations are passive and rely on conventions established for other modalities. In this work, we investigated proactive olfactory interactions, where users actively engage with scents, focusing on mid-air gestures as an input modality that mimics real-world object and scent manipulation, e.g., fanning away an odor. In our first study, participants developed a user-defined gesture set for interacting with scents in Virtual Reality (VR), covering various object types (solid, liquid, gas) and interaction modes (out-of-reach, not graspable, graspable). In a second study, participants compared interacting with scents in VR using traditional controllers versus proactive gestures, revealing that proactive gestures enhanced user experience, presence, and task performance. Finally, an exploratory study showed participants' strong preferences for personalization, enhanced interaction capabilities, and multi-sensory integration. Based on these findings, we propose design guidelines and applications for proactive interactions with scents.

Authors
Junxian Li
Zhejiang University, Hangzhou, Zhejiang, China
Yanan Wang
Donghua University, Shanghai, China
Zhitong Cui
Zhejiang University, Hangzhou, Zhejiang, China
Jas Brooks
University of Chicago, Chicago, Illinois, United States
Yifan Yan
Donghua University, Shanghai, China
Zhengyu Lou
College of Fashion and Design, Shanghai, China
Yucheng Li
Donghua University, Shanghai, China
DOI

10.1145/3706598.3713964

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713964

Video

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Immersive Touch and Gesture Interaction

Room G303
7 presentations
2025-04-30 20:10:00 – 2025-04-30 21:40:00