HapticGen: Generative Text-to-Vibration Model for Streamlining Haptic Design

Abstract

Designing haptic effects is a complex, time-consuming process requiring specialized skills and tools. To support haptic design, we introduce HapticGen, a generative model designed to create vibrotactile signals from text inputs. We conducted a formative workshop to identify requirements for an AI-driven haptic model. Given the limited size of existing haptic datasets, we trained HapticGen on a large, labeled dataset of 335k audio samples using an automated audio-to-haptic conversion method. Expert haptic designers then used HapticGen's integrated interface to prompt and rate signals, creating a haptic-specific preference dataset for fine-tuning. We evaluated the fine-tuned HapticGen with 32 users, qualitatively and quantitatively, in an A/B comparison against a baseline text-to-audio model with audio-to-haptic conversion. Results show significant improvements in five factors spanning haptic experience (e.g., realism) and system usability (e.g., future use). Qualitative feedback indicates that HapticGen streamlines the ideation process for designers and helps generate diverse, nuanced vibrations.
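The abstract does not spell out the automated audio-to-haptic conversion used to build the 335k-sample training set, so the following is a minimal, hypothetical sketch of one common approach: band-limit the audio to the frequency range where skin is most sensitive and keep its amplitude envelope as a vibration waveform. The function name audio_to_vibration, the 500 Hz cutoff, and the output rate are illustrative assumptions, not the paper's method.

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt, hilbert

def audio_to_vibration(path, cutoff_hz=500.0, out_rate=8000):
    # Hypothetical audio-to-haptic conversion; not HapticGen's actual pipeline.
    rate, audio = wavfile.read(path)
    audio = audio.astype(np.float64)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)          # mix multi-channel audio to mono
    audio /= np.max(np.abs(audio)) + 1e-12  # normalize to [-1, 1]

    # Low-pass filter: vibrotactile perception is strongest roughly below 500 Hz.
    sos = butter(4, cutoff_hz, btype="low", fs=rate, output="sos")
    low = sosfilt(sos, audio)

    # Amplitude envelope via the analytic signal; this can drive an LRA or
    # voice-coil actuator as a vibration intensity profile.
    envelope = np.abs(hilbert(low))

    # Crude decimation to a typical actuator driver rate (assumed value).
    step = max(1, rate // out_rate)
    return envelope[::step], rate // step

A fixed mapping like this preserves the rhythm and rough texture of the source audio, which is what makes large labeled audio datasets usable as proxy training data for a haptic model.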

Authors
Youjin Sung
KAIST, Daejeon, Korea, Republic of
Kevin John
Arizona State University, Tempe, Arizona, United States
Sang Ho Yoon
KAIST, Daejeon, Korea, Republic of
Hasti Seifi
Arizona State University, Tempe, Arizona, United States
DOI

10.1145/3706598.3713609

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713609

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Haptic Interactions

Room: G402
7 presentations
2025-04-28 23:10:00 – 2025-04-29 00:40:00