Nonvisual Support for Understanding and Reasoning about Data Structures

Abstract

Blind and visually impaired (BVI) computer science students face systematic barriers when learning data structures: current accessibility approaches typically translate diagrams into alternative text, focusing on visual appearance rather than preserving the underlying structure essential for conceptual understanding. More accessible alternatives often fail to scale in complexity, in cost to produce, or both. Motivated by a recent shift toward tools that generate visual diagrams from code, we propose a solution that automatically creates accessible representations from structural information about diagrams. Based on a Wizard-of-Oz study, we derive design requirements for an automated system, Arboretum, that compiles text-based diagram specifications into three synchronized nonvisual formats: tabular, navigable, and tactile. Our evaluation with BVI users highlights the strength of tactile graphics for complex tasks such as binary search; the benefits of offering multiple, complementary nonvisual representations; and the limitations of existing digital navigation patterns for structural reasoning. This work reframes access to data structures by preserving their structural properties, contributing a practical system to advance accessible CS education.

Authors
Brianna L. Wimer
University of Notre Dame, South Bend, Indiana, United States
Ritesh Kanchi
Harvard University, Cambridge, Massachusetts, United States
Kaija Frierson
University of Arkansas, Fayetteville, Fayetteville, Arkansas, United States
Venkatesh Potluri
University of Michigan, Ann Arbor, Michigan, United States
Ronald Metoyer
University of Notre Dame, South Bend, Indiana, United States
Jennifer Mankoff
University of Washington, Seattle, Washington, United States
Miya Natsuhara
University of Washington, Seattle, Washington, United States
Matt Wang
University of Washington, Seattle, Washington, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Non-visual and conversational experiences

P1 - Room 125
6 presentations
2026-04-17, 18:00–19:30