Braille literacy has fallen in recent years, and many blind children now grow up without learning Braille. However, learning Braille can increase employment chances and improve literacy skills. We introduce BrailleBlocks, a system to help visually impaired children learn and practice Braille alongside a sighted parent. BrailleBlocks comprises a set of tangible blocks and pegs, each block representing a Braille cell, and an associated application with games. The system automatically tracks and recognizes the blocks so that parents can follow along even if they cannot read Braille. We conducted a user study of BrailleBlocks with five families, comprising five parents and six visually impaired children. The contributions of this work are a novel approach to Braille educational toys, observations of how visually impaired children and sighted parents used this system together, their insights on current issues with Braille educational tools, and actionable feedback for future Braille-based learning tools.
Crosswords are a popular recreational game that relies on the spatial relationship between words. As a player answers clues, they begin to organize words into an intersecting grid. A good non-visual representation should convey the interrelation of words and support the user in building a practical spatial image of the crossword grid. This paper examines two approaches to representing a crossword puzzle for visually impaired users: a screen-reader-based crossword and an audio-tactile crossword puzzle. We evaluate the designs in a study with 10 visually impaired participants. The audio-tactile representation was found to support the practical use of the crossword's spatial structure, while the screen-reader-based puzzle leveraged participants' prior experience in navigating websites. The paper discusses critical aspects of our study and presents a perspective on the use of multimodal interfaces for such spatial applications.
While developments in 3D printing have opened up opportunities for improved access to graphical information for people who are blind or have low vision (BLV), printed models can provide only limited detail and contextual information. Interactive 3D printed models (I3Ms) that provide audio labels and/or a conversational agent interface potentially overcome this limitation. We conducted a Wizard-of-Oz exploratory study to uncover the multi-modal interaction techniques that BLV people would like to use when exploring I3Ms, and investigated their attitudes towards different levels of model agency. These findings informed the creation of an I3M prototype of the solar system. A second user study with this model revealed a hierarchy of interaction, with BLV users preferring tactile exploration, followed by touch gestures to trigger audio labels, and then natural language to fill in knowledge gaps and confirm understanding.
Tactile materials are powerful teaching aids for students with visual impairments (VIs). To design these materials, designers must use modeling applications, which have steep learning curves and rely on visual feedback. Today, Orientation and Mobility (O&M) specialists and teachers are often responsible for designing these materials. However, most of them do not have professional modeling skills, and many are visually impaired themselves. To address this issue, we designed Molder, an accessible design tool for interactive tactile maps, an important type of tactile material that can help students learn O&M skills. A designer uses Molder to design a map with tangible input techniques, while Molder provides auditory feedback and high-contrast visual feedback. We evaluated Molder with 12 participants (8 with VIs, 4 sighted). After a 30-minute training session, all participants were able to use Molder to design maps with customized tactile and interactive information.
Visualisations are commonly used to understand social, biological and other kinds of networks. Currently, we do not know how to effectively present network data to people who are blind or have low vision (BLV). We ran a controlled study with 8 BLV participants comparing four tactile representations: organic node-link diagram, grid node-link diagram, adjacency matrix and Braille list. We found that the node-link representations were preferred and more effective for path following and cluster identification, while the matrix and list were better for adjacency tasks. This is broadly in line with findings for the corresponding visual representations.