Spatial Audio-Enhanced Multimodal Graph Rendering for Efficient Data Trend Learning on Touchscreen Devices

Abstract

Touchscreen-based rendering of graphics using vibrations, sonification, and text-to-speech is a promising approach for nonvisual access to graphical information, but extracting trends from complex data representations nonvisually is challenging. This work presents the design of a multimodal feedback scheme with integrated spatial audio for the exploration of histograms and scatter plots on touchscreens. We detail the hardware employed and the algorithms used to control vibrations and sonification adjustments through the change of pitch and directional stereo output. We conducted formative testing with 5 blind or visually impaired participants, and results illustrate that spatial audio has the potential to increase the identification of trends in the data, at the expense of a skewed mental representation of the graph. This design work and pilot study are critical to the iterative, human-centered approach of rendering multimodal graphics on touchscreens and contribute a new scheme for efficiently capturing data trends in complex data representations.

Authors
Wilfredo Joshua Robinson Moore
Saint Louis University, St. Louis, Missouri, United States
Medhani Kalal
Saint Louis University, St. Louis, Missouri, United States
Jennifer L. Tennison
Saint Louis University, Swansea, Illinois, United States
Nicholas A. Giudice
University of Maine, Orono, Maine, United States
Jenna Gorlewicz
Saint Louis University, St. Louis, Missouri, United States
Paper URL

doi.org/10.1145/3613904.3641959

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Data Visualization: Charts

5 presentations
2024-05-13 23:00:00 – 2024-05-14 00:20:00