Supporting Accessible Data Visualization Through Audio Data Narratives
Description

Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that, with audio narratives, users gain significantly more insights from the data. Users reported that data narratives helped them better extract and comprehend the information in both the sonification and the description.
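The abstract defines sonification as the mapping of data to non-speech sounds. As a minimal illustration of that general idea (not the paper's own narrative-generation method), the sketch below linearly maps data values to pitches; the frequency range and function name are illustrative assumptions.

```python
# Minimal sonification sketch: linearly map data values to pitches
# (frequencies in Hz). The 220-880 Hz range is an illustrative
# assumption, not a parameter from the paper.

def sonify(values, f_min=220.0, f_max=880.0):
    """Map each data value to a frequency between f_min and f_max."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # avoid division by zero for constant data
    return [f_min + (v - lo) / span * (f_max - f_min) for v in values]

# Rising data produces rising pitch.
print(sonify([0, 5, 10]))  # → [220.0, 550.0, 880.0]
```

In practice the resulting frequencies would drive an audio synthesizer (e.g. sine tones played in sequence), so that trends in the data become audible as rising or falling pitch.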

Slide-Tone and Tilt-Tone: 1-DOF Haptic Techniques for Conveying Shape Characteristics of Graphs to Blind Users
Description

We increasingly rely on up-to-date, data-driven graphs to understand our environments and make informed decisions. However, many of the methods blind and visually impaired (BVI) users rely on to access data-driven information do not convey important shape characteristics of graphs, are not refreshable, or are prohibitively expensive. To address these limitations, we introduce two refreshable, 1-DOF audio-haptic interfaces based on haptic cues fundamental to object shape perception. Slide-tone couples finger position with sonification, and Tilt-tone couples fingerpad contact inclination with sonification, to provide shape feedback to users. Through formative design workshops (n = 3) and controlled evaluations (n = 8), we found that BVI participants appreciated the additional shape information, versatility, and reinforced understanding these interfaces provide, and that task accuracy was comparable to using interactive tactile graphics or sonification alone. Our research offers insight into the benefits, limitations, and considerations for adopting these haptic cues into a data visualization context.

VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In
Description

JavaScript visualization libraries are widely used to create online data visualizations but provide limited access to their information for screen-reader users. Building on prior findings about the experiences of screen-reader users with online data visualizations, we present VoxLens, an open-source JavaScript plug-in that, with a single line of code, improves the accessibility of online data visualizations for screen-reader users using a multi-modal approach. Specifically, VoxLens enables screen-reader users to obtain a holistic summary of presented information, play sonified versions of the data, and interact with visualizations in a "drill-down" manner using voice-activated commands. Through task-based experiments with 21 screen-reader users, we show that VoxLens improves the accuracy of information extraction and interaction time by 122% and 36%, respectively, over existing conventional interaction with online data visualizations. Our interviews with screen-reader users suggest that VoxLens is a "game-changer" in making online data visualizations accessible to screen-reader users, saving them time and effort.

Improving Colour Patterns to Assist People with Colour Vision Deficiency
Description

Many daily tasks rely on accurately identifying and distinguishing between different colours. However, these tasks can be frustrating and potentially dangerous for people with Colour Vision Deficiency (CVD). Although prior work has explored how pattern overlays on top of colours can support people with CVD, the solutions were often unintuitive or required significant training to become proficient. We address this problem by creating two new colour patterns (ColourIconizer, ColourMix). We evaluated these patterns against a previously published colour pattern (ColourMeters) using an online evaluation with three new colour identification tasks (Selection Task, Transition Task, Sorting Task). ColourMeters helped with the Transition Task but struggled with the Selection and Sorting Tasks. Conversely, ColourIconizer helped with the Selection and Sorting Tasks but struggled with the Transition Task. ColourMix provided general assistance on all tasks. Our combined results help inform and improve the design of future colour patterns.

Infosonics: Accessible Infographics for People who are Blind using Sonification and Voice
Description

Data visualisations are increasingly used online to engage readers and enable independent analysis of the data underlying news stories. However, access to such infographics is problematic for readers who are blind or have low vision (BLV). Equitable access to information is a basic human right and essential for independence and inclusion. We introduce infosonics, the audio equivalent of infographics: a new style of interactive sonification that uses a spoken introduction and annotation, non-speech audio, and sound design elements to present data in an understandable and engaging way. A controlled user evaluation with 18 BLV adults found that a COVID-19 infosonic enabled a clearer mental image than a traditional sonification. Further, infosonics proved complementary to text descriptions and facilitated independent understanding of the data. Based on our findings, we provide preliminary suggestions for infosonics design, which we hope will enable BLV people to gain equitable access to online news and information.
