Touchscreen-based rendering of graphics using vibrations, sonification, and text-to-speech is a promising approach for nonvisual access to graphical information, but extracting trends from complex data representations nonvisually is challenging. This work presents the design of a multimodal feedback scheme with integrated spatial audio for exploring histograms and scatter plots on touchscreens. We detail the hardware employed and the algorithms used to control vibrations and sonification adjustments through changes in pitch and directional stereo output. We conducted formative testing with five blind or visually impaired participants; the results suggest that spatial audio can improve the identification of trends in the data, at the cost of a skewed mental representation of the graph. This design work and pilot study are critical to the iterative, human-centered approach of rendering multimodal graphics on touchscreens and contribute a new scheme for efficiently capturing data trends in complex data representations.
https://doi.org/10.1145/3613904.3641959
The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)