Sound2Hap: Learning Audio-to-Vibrotactile Haptic Generation from Human Ratings

Abstract

Environmental sounds like footsteps, keyboard typing, or a dog barking carry rich information and emotional context, making them valuable for designing haptics in user applications. Existing audio-to-vibration methods, however, rely on signal-processing rules tuned for music or games and often fail to generalize across diverse sounds. To address this, we first investigated user perception of four existing audio-to-haptic algorithms, then created a data-driven model for environmental sounds. In Study 1, 34 participants rated vibrations generated by the four algorithms for 1,000 sounds, revealing no consistent algorithm preferences. Using this dataset, we trained Sound2Hap, a CNN-based autoencoder, to generate perceptually meaningful vibrations from diverse sounds with low latency. In Study 2, 15 participants rated its output higher than signal-processing baselines on both audio-vibration match and the Haptic Experience Index (HXI), finding it more harmonious with diverse sounds. This work demonstrates a perceptually validated approach to audio-haptic translation, broadening the reach of sound-driven haptics.
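
The abstract describes Sound2Hap as a CNN-based autoencoder that maps audio to vibrotactile output. As a rough illustration only, the sketch below shows what such an audio-to-vibration autoencoder could look like; the mel-spectrogram input, layer sizes, and PyTorch implementation are assumptions for illustration, not the architecture reported in the paper.

# Minimal sketch, assuming a mel-spectrogram input and a single-channel
# vibration waveform output; layers and sizes are illustrative assumptions,
# not the authors' Sound2Hap architecture.
import torch
import torch.nn as nn

class Sound2HapSketch(nn.Module):
    def __init__(self, n_mels=64, latent_channels=128):
        super().__init__()
        # Encoder: compress the (n_mels x frames) spectrogram along time.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv1d(128, latent_channels, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
        )
        # Decoder: upsample the latent sequence back to a 1-channel
        # vibration signal, bounded to [-1, 1] for an actuator.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 64, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(64, 1, kernel_size=5, stride=2,
                               padding=2, output_padding=1),
            nn.Tanh(),
        )

    def forward(self, mel):
        # mel: (batch, n_mels, frames) -> vibration: (batch, 1, frames)
        return self.decoder(self.encoder(mel))

model = Sound2HapSketch()
mel = torch.randn(2, 64, 100)   # two 100-frame mel spectrograms
print(model(mel).shape)         # torch.Size([2, 1, 100])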

Award
Best Paper
Authors
Yinan Li
Arizona State University, Tempe, Arizona, United States
Hasti Seifi
Arizona State University, Tempe, Arizona, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Haptic and Multisensory Feedback

P1 - Room 118
7 presentations
2026-04-13, 20:15–21:45