Umwelt: Accessible Structured Editing of Multi-Modal Data Representations

Abstract

We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset's functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities. For instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with 5 blind / low-vision expert users, we found that Umwelt's multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
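As a rough illustration of the abstract's idea of a shared query predicate that is reified across modalities, the TypeScript sketch below shows one possible shape of that coordination. It is not drawn from Umwelt's implementation; the names (Datum, Predicate, Modality, SharedSelection) and the console output are hypothetical.

```typescript
// Minimal sketch: one selection predicate drives every modality at once.

type Datum = Record<string, number | string>;

// Hypothetical predicate type: a boolean test over a single record.
type Predicate = (d: Datum) => boolean;

// Each modality consumes the same data and predicate, interpreting them in its own terms.
interface Modality {
  update(data: Datum[], selected: Predicate): void;
}

class Visualization implements Modality {
  update(data: Datum[], selected: Predicate): void {
    const highlighted = data.filter(selected);
    console.log(`visualization: highlighting ${highlighted.length} marks`);
  }
}

class Sonification implements Modality {
  update(data: Datum[], selected: Predicate): void {
    const audible = data.filter(selected);
    console.log(`sonification: playing ${audible.length} tones`);
  }
}

class TextualDescription implements Modality {
  update(data: Datum[], selected: Predicate): void {
    const described = data.filter(selected);
    console.log(`description: ${described.length} of ${data.length} records selected`);
  }
}

// Shared state: changing the predicate re-renders all registered modalities together.
class SharedSelection {
  private modalities: Modality[] = [];
  constructor(private data: Datum[]) {}

  register(m: Modality): void {
    this.modalities.push(m);
  }

  setPredicate(p: Predicate): void {
    for (const m of this.modalities) m.update(this.data, p);
  }
}

// Usage: selecting "year >= 2000" in any one modality updates all of them.
const data: Datum[] = [
  { year: 1995, value: 10 },
  { year: 2005, value: 20 },
  { year: 2015, value: 30 },
];
const shared = new SharedSelection(data);
shared.register(new Visualization());
shared.register(new Sonification());
shared.register(new TextualDescription());
shared.setPredicate((d) => (d.year as number) >= 2000);
```

The only point of the sketch is that selection state lives in one place and every modality re-derives its presentation from the same predicate, which is what allows navigation in one modality to highlight or filter the others.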

Authors
Jonathan Zong
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Isabella Pedraza Pineros
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Mengzhu (Katie) Chen
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Daniel Hajas
University College London, London, United Kingdom
Arvind Satyanarayan
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3641996


Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Assistive Interactions: Navigation and Visualisation for Users Who are Blind or Low Vision

Room: 311
5 presentations
2024-05-15, 18:00–19:20