Supporting Accessible Data Visualization Through Audio Data Narratives

Abstract

Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that with audio narratives, users gain significantly more insights from the data. Users described that data narratives helped them better extract and comprehend the information in both the sonification and the description.

Authors
Alexa F. Siu
Stanford University, Stanford, California, United States
Gene S-H. Kim
Stanford University, Stanford, California, United States
Sile O'Modhrain
University of Michigan, Ann Arbor, Michigan, United States
Sean Follmer
Stanford University, Stanford, California, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3517678


Conference: CHI 2022

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2022.acm.org/)

Session: Accessibility and Data Visualization

383-385
5 presentations
2022-05-03 23:15:00 to 2022-05-04 00:30:00