HRTF Estimation in the Wild

Abstract

Head-Related Transfer Functions (HRTFs) play a crucial role in creating immersive spatial audio experiences. However, HRTFs differ significantly from person to person, and traditional methods for estimating personalized HRTFs are expensive, time-consuming, and require specialized equipment. We imagine a world where your personalized HRTF can be determined by capturing data through earbuds in everyday environments. In this paper, we propose a novel approach for deriving personalized HRTFs that only relies on in-the-wild binaural recordings and head tracking data. By analyzing how sounds change as the user rotates their head through different environments with different noise sources, we can accurately estimate their personalized HRTF. Our results show that our predicted HRTFs closely match ground-truth HRTFs measured in an anechoic chamber. Furthermore, listening studies demonstrate that our personalized HRTFs significantly improve sound localization and reduce front-back confusion in virtual environments. Our approach offers an efficient and accessible method for deriving personalized HRTFs and has the potential to greatly improve spatial audio experiences.
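For background on what an HRTF does, the sketch below shows the standard way a measured HRTF is used for binaural rendering: the mono source signal is convolved with the left- and right-ear head-related impulse responses (HRIRs) for a given source direction. This is illustrative context only, not the paper's estimation method; the HRIR arrays here are hypothetical toy values (real ones come from measurement or a dataset).

```python
import numpy as np

def render_binaural(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right head-related impulse
    responses (HRIRs) to produce a two-channel binaural signal."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right])

# Toy example: a unit impulse rendered through a hypothetical HRIR pair
# that models only interaural time difference (a 1-sample delay).
mono = np.zeros(8)
mono[0] = 1.0
hrir_l = np.array([0.0, 1.0])   # left ear: 1-sample delay
hrir_r = np.array([1.0, 0.0])   # right ear: no delay
out = render_binaural(mono, hrir_l, hrir_r)
```

Personalizing the HRTF means replacing these generic impulse responses with ones matched to the listener's own anatomy, which is what the paper's estimation procedure targets.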

Authors
Vivek Jayaram
University of Washington, Seattle, Washington, United States
Ira Kemelmacher-Shlizerman
University of Washington, Seattle, Washington, United States
Steve Seitz
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3586183.3606782

Conference: UIST 2023

ACM Symposium on User Interface Software and Technology

Session: Sensory Shenanigans: Immersion and Illusions in Mixed Reality

Venetian Room
6 presentations
2023-11-01 18:00–19:20