MoXaRt: Audio-Visual Object-Guided Sound Interaction for XR

Abstract

In Extended Reality (XR), complex acoustic environments often overwhelm users: entangled sound sources compromise both scene awareness and social engagement. We introduce MoXaRt, a real-time XR system that uses audio-visual cues to separate these sources and enable fine-grained sound interaction. MoXaRt's core is a cascaded architecture that performs coarse, audio-only separation in parallel with visual detection of sources (e.g., faces, instruments). These visual anchors then guide refinement networks that isolate individual sources, separating complex mixes of up to 5 concurrent sources (e.g., 2 voices + 3 instruments) with ~2-second processing latency. We validate MoXaRt through a technical evaluation on a new dataset of 30 one-minute recordings featuring concurrent speech and music, and through a 22-participant user study. Empirical results indicate that our system significantly enhances speech intelligibility, yielding a 36.2% increase in listening comprehension (p < 0.01) in adversarial acoustic environments while substantially reducing cognitive load (p < 0.001), paving the way for more perceptive and socially adept XR experiences.
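The abstract describes a two-stage cascade: an audio-only network produces coarse stems while a visual detector finds candidate sources, and each visual anchor then conditions a refinement step that isolates one source. The Python sketch below is only an illustration of that dataflow under stated assumptions: the names (VisualAnchor, coarse_separate, refine_with_anchor, separate_sources) are hypothetical, and the FFT masking and similarity weighting are runnable stand-ins for the paper's learned networks, not the authors' implementation.

```python
"""Minimal sketch of a cascaded audio-visual separation pipeline.

All names and the stand-in signal processing are illustrative
assumptions, not MoXaRt's actual code.
"""
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class VisualAnchor:
    """A detected visual source (e.g., a face or an instrument)."""
    label: str                 # e.g., "face" or "guitar"
    embedding: np.ndarray      # appearance feature conditioning the refinement


def coarse_separate(mixture: np.ndarray, n_stems: int) -> List[np.ndarray]:
    """Audio-only stage: split the mixture into rough stems.

    Stand-in: band-limit each stem to a different frequency band via
    FFT masking, purely so the sketch runs end to end.
    """
    spec = np.fft.rfft(mixture)
    edges = np.linspace(0, len(spec), n_stems + 1, dtype=int)
    stems = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        masked = np.zeros_like(spec)
        masked[lo:hi] = spec[lo:hi]
        stems.append(np.fft.irfft(masked, n=len(mixture)))
    return stems


def refine_with_anchor(stems: List[np.ndarray], anchor: VisualAnchor) -> np.ndarray:
    """Visually guided stage: fuse coarse stems into one isolated source.

    Stand-in: weight stems by similarity to the anchor embedding; a real
    system would use a refinement network conditioned on the anchor.
    """
    feats = np.stack([np.abs(np.fft.rfft(s))[: len(anchor.embedding)] for s in stems])
    feats /= np.linalg.norm(feats, axis=1, keepdims=True) + 1e-8
    weights = feats @ (anchor.embedding / (np.linalg.norm(anchor.embedding) + 1e-8))
    weights = np.clip(weights, 0.0, None)
    weights /= weights.sum() + 1e-8
    return np.tensordot(weights, np.stack(stems), axes=1)


def separate_sources(mixture: np.ndarray, anchors: List[VisualAnchor]) -> List[np.ndarray]:
    """Cascade: coarse audio-only separation, then one refined track per anchor."""
    stems = coarse_separate(mixture, n_stems=max(len(anchors), 2))
    return [refine_with_anchor(stems, a) for a in anchors]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mix = rng.standard_normal(16000)  # 1 s of synthetic audio at 16 kHz
    anchors = [VisualAnchor("face", rng.standard_normal(64)),
               VisualAnchor("guitar", rng.standard_normal(64))]
    tracks = separate_sources(mix, anchors)
    print([t.shape for t in tracks])  # one isolated track per visual anchor
```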

Authors
Tianyu Xu
Google, Mountain View, California, United States
Qianhui Zheng
University of Michigan, Ann Arbor, Michigan, United States
Sieun Kim
University of Michigan, Ann Arbor, Michigan, United States
Ruoyu Xu
Columbia University, New York, New York, United States
Tejasvi Ravi
Google, San Francisco, California, United States
Anuva Kulkarni
Google, Mountain View, California, United States
Katrina Passarella-Ward
Google, San Francisco, California, United States
Junyi Zhu
University of Michigan, Ann Arbor, Michigan, United States
Adarsh Kowdle
Google, San Francisco, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Mixed-Reality Systems for Spatial Understanding and Navigation

P1 - Room 128
6 presentations
2026-04-17, 18:00–19:30