Co-Designing Multimodal Systems for Accessible Asynchronous Dance Instruction

Abstract

Videos make exercise instruction widely available, but they rely on visual demonstrations that blind and low vision (BLV) learners cannot see. While audio descriptions (AD) can make videos accessible, describing movement remains challenging, as the AD must convey both what to do (mechanics, location, orientation) and how to do it (speed, fluidity, timing). Prior work has thus used multimodal instruction to support BLV learners with individual simple movements. However, it is unclear how these approaches scale to dance instruction, which involves unique, complex movements and precise timing constraints. To inform accessible remote dance instruction systems, we conducted three co-design workshops (N=28) with BLV dancers, instructors, and experts in sound, haptics, and AD. Participants designed eight systems revealing common themes: staged learning to dissect routines, crafting vocabularies for movements, and selectively using modalities—narration for movement structure, sound for expression, and haptics for spatial cues. We conclude with design implications for making dance learning accessible.

Authors
Ujjaini Das
University of Texas at Austin, Austin, Texas, United States
Shreya Kappala
University of Texas at Austin, Austin, Texas, United States
Meng Chen
University of California, Berkeley, Berkeley, California, United States
Mina Huh
University of Texas at Austin, Austin, Texas, United States
Amy Pavel
University of California, Berkeley, Berkeley, California, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Sound, Music, and Dance Accessibility

P1 - Room 120
7 presentations
2026-04-15, 20:15–21:45