45. Access for People with Visual Impairment

Tactile Fixations: A Behavioral Marker on How People with Visual Impairments Explore Raised-line Graphics
Description

Raised-line graphics are tactile documents made for people with visual impairments (VI). Their exploration relies on a complex two-handed behavior. To better understand the cognitive processes underlying this exploration, we proposed a new method based on “tactile fixations”. A tactile fixation occurs when a finger remains stationary within a specific spatial and temporal window. Stationary fingers are known to play an active role when exploring tactile graphics, but these stationary episodes had never been formally defined or studied. In this study, we first defined the concept of tactile fixation, then conducted a behavioral study with ten participants with VI to assess the role of tactile fixations under different conditions. The results show that tactile fixations vary with factors such as the graphic type, the hand involved, and the aim of the exploration.
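
The notion of a finger being stationary “within a specific spatial and temporal window” can be made concrete with a dispersion-threshold detector, analogous to fixation detection in eye tracking. The sketch below is only an illustration under assumed thresholds and an assumed (t, x, y) sample format; it is not the detection method evaluated in the paper.

```python
# Illustrative dispersion-threshold detector for tactile fixations.
# The 5 mm / 200 ms thresholds and the sample format are assumptions,
# not the parameters used in the study.
from dataclasses import dataclass

@dataclass
class Sample:
    t: float  # timestamp in seconds
    x: float  # finger position in millimetres
    y: float

def detect_fixations(samples, max_dispersion_mm=5.0, min_duration_s=0.2):
    """Return (start_t, end_t) spans where the finger stays nearly still."""
    fixations, start = [], 0
    for end in range(len(samples)):
        xs = [s.x for s in samples[start:end + 1]]
        ys = [s.y for s in samples[start:end + 1]]
        # Dispersion of the candidate window = size of its bounding box.
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_mm:
            if samples[end - 1].t - samples[start].t >= min_duration_s:
                fixations.append((samples[start].t, samples[end - 1].t))
            start = end  # restart the window at the sample that broke it
    if samples and samples[-1].t - samples[start].t >= min_duration_s:
        fixations.append((samples[start].t, samples[-1].t))
    return fixations
```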

Tactile Compass: Enabling Visually Impaired People to Follow a Path with Continuous Directional Feedback
Description

Accurate and effective directional feedback is crucial for an electronic travel aid that guides visually impaired people along a path. This paper presents Tactile Compass, a hand-held device that provides continuous directional feedback with a rotatable needle pointing toward the planned direction. We conducted two lab studies to evaluate the effectiveness of the feedback solution. Results showed that, using Tactile Compass, participants could turn to face a target direction in place with a mean deviation of 3.03° and could smoothly navigate along paths 60 cm wide, with a mean deviation from the centerline of 12.1 cm. Subjective feedback showed that Tactile Compass was easy to learn and use.
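
As a rough illustration of this kind of continuous directional feedback, the needle can be driven by the signed difference between the planned walking direction and the user's current heading. The function below is a minimal sketch under assumed compass-degree conventions, not the authors' implementation.

```python
# Hypothetical needle-angle computation: the needle points at the planned
# direction relative to the device's forward axis, wrapped to (-180, 180]
# so it always takes the shorter rotation. Conventions are assumptions.
def needle_angle(planned_heading_deg: float, current_heading_deg: float) -> float:
    diff = (planned_heading_deg - current_heading_deg) % 360.0
    return diff - 360.0 if diff > 180.0 else diff

# Example: planned direction 10° while the user currently faces 350°
# -> the needle points +20° (slightly to the right).
assert needle_angle(10.0, 350.0) == 20.0
```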

ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects
Description

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on touchscreens. Touchscreen applications such as sports games often require simultaneously recognizing and interacting with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to exploit the human tactile sense's ability to spatially register both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by letting users perceive the objects through the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.
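
One way to picture the two-sided interaction is as a shared coordinate space: objects are rendered on a tactile display under the palm, and touch input on the back of the hand is mirrored onto the same grid. The sketch below is purely illustrative; the grid size, mirroring convention, and normalized coordinates are assumptions rather than ThroughHand's actual hardware parameters.

```python
# Hypothetical mapping from a touch on the back of the hand to a cell of the
# tactile grid under the palm. Grid resolution and mirroring are assumptions.
GRID_W, GRID_H = 6, 4  # assumed pin grid under the palm (columns x rows)

def back_of_hand_to_palm(nx: float, ny: float):
    """Map a normalized touch point (0..1, 0..1) to a palm grid cell.

    The x axis is mirrored because the palm and the back of the hand face in
    opposite directions; the y axis is kept as-is.
    """
    col = min(GRID_W - 1, int((1.0 - nx) * GRID_W))
    row = min(GRID_H - 1, int(ny * GRID_H))
    return col, row

# A touch near the left edge of the back of the hand lands on the right side
# of the palm grid.
assert back_of_hand_to_palm(0.05, 0.5) == (5, 2)
```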

Exploring Technology Design for Students with Vision Impairment in the Classroom and Remotely
Description

Teachers of the Visually Impaired (TVIs) teach academic and functional living skills simultaneously to prepare students with vision impairment to be successful and independent. Current educational tools primarily focus on academic instruction rather than the multifaceted approach these students need. Our work aims to understand how technology can integrate behavioral skills, such as independence, and support TVIs in their preferred teaching strategies. We observed elementary classrooms at a school for the blind for six weeks to study how educators design lessons and use technology to supplement their instruction across subjects. After the observational study, we conducted remote interviews with educators to understand how technology can support students in building academic and behavioral skills both in person and remotely. Educators suggested incorporating audio feedback that motivates students to play and learn consistently, tracking student progress for parents and educators, and designing features that help students build independence and develop collaborative skills.

Community Based Robot Design for Classrooms with Mixed Visual Abilities Children
Description

Children with visual impairments (VI) face challenges in collaborative learning in classrooms. Robots have the potential to support inclusive classroom experiences by leveraging their physicality, bespoke social behaviors, sensors, and multimodal feedback. However, the design of social robots for mixed-visual-abilities classrooms remains largely unexplored. This paper presents a four-month-long community-based design process in which we engaged with a school community. We provide insights into the barriers experienced by children and how social robots can address them. We also report on a participatory design activity with children of mixed visual abilities, highlighting the expected roles, attitudes, and physical characteristics of robots. Findings contextualize social robots within inclusive classroom settings as a holistic solution that can interact anywhere when needed, and suggest a broader view of inclusion that extends beyond disability to children's personality traits, technology access, and mastery of school subjects. We conclude with reflections on the community-based design process.

LightWrite: Teach Handwriting to The Visually Impaired with A Smartphone
Description

Learning to write is challenging for blind and low vision (BLV) people because of the lack of visual feedback. Despite rapid advances in digital technology, handwriting remains an essential part of daily life. Although tools for teaching BLV people to write exist, many are expensive and require the help of sighted teachers. We propose LightWrite, a low-cost, easy-to-access smartphone application that uses voice-based descriptive instruction and feedback to teach BLV users to write English lowercase letters and Arabic digits in a specially designed font. A two-stage study with 15 BLV users with little prior writing knowledge shows that LightWrite can successfully teach users to handwrite characters in an average of 1.09 minutes per letter. After initial training and 20 minutes of daily practice for 5 days, participants were able to write an average of 19.9 out of 26 letters that were recognizable to sighted raters.

LineChaser: A Smartphone-Based Navigation System for Blind People to Stand in Line
Description

Standing in line is one of the most common social behaviors in public spaces but can be challenging for blind people. We propose an assistive system named LineChaser, which navigates a blind user to the end of a line and continuously reports the distance and direction to the last person in the line so that the user can follow them. LineChaser uses the RGB camera in a smartphone to detect nearby pedestrians and the built-in infrared depth sensor to estimate their positions. From these position estimates, LineChaser determines whether nearby pedestrians are standing in line and uses audio and vibration signals to notify the user when they should start or stop moving forward. In this way, users can stay correctly positioned while maintaining social distance. We conducted a usability study with 12 blind participants. LineChaser allowed them to successfully navigate lines and significantly increased their confidence in standing in line.
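
A minimal sketch of the follow-the-last-person logic described above: report the distance and direction to the last person in line, and tell the user to start or stop moving around a target gap with a small hysteresis band. The thresholds, coordinate frame, and interface are assumptions for illustration, not LineChaser's actual design.

```python
# Hypothetical guidance step: thresholds and conventions are illustrative only.
from math import atan2, degrees, hypot

FOLLOW_DISTANCE_M = 1.5  # assumed target gap to the last person in line
STOP_MARGIN_M = 0.3      # assumed hysteresis so start/stop cues do not flicker

def guidance(last_person_xy, currently_moving):
    """Return (distance_m, bearing_deg, should_move) toward the last person.

    last_person_xy: (x, y) of the last person in metres, in the camera frame
    (x to the right, y straight ahead).
    """
    x, y = last_person_xy
    distance = hypot(x, y)
    bearing = degrees(atan2(x, y))  # 0° is straight ahead, positive to the right
    if currently_moving:
        should_move = distance > FOLLOW_DISTANCE_M                  # stop once close enough
    else:
        should_move = distance > FOLLOW_DISTANCE_M + STOP_MARGIN_M  # restart after the line advances
    return distance, bearing, should_move
```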

Smartphone Usage by Expert Blind Users
Description

People with vision impairments access smartphones with the help of screen reader apps such as TalkBack for Android and VoiceOver for iPhone. Prior research has mostly focused on understanding touchscreen phone adoption and the typing performance of novice blind users by logging their real-world smartphone usage. Understanding the smartphone usage patterns and practices of expert users can help in developing tools and tutorials for transitioning novice and intermediate users to expert users. In this work, we logged smartphone usage data of eight expert Android smartphone users with visual impairments for four weeks, and then interviewed them. This paper presents a detailed analysis that uncovered novel usage patterns, such as extensive use of directional gestures, reliance on voice and external keyboards for text input, and repurposing of explore-by-touch as a single tap. We conclude with design recommendations to inform the future of mobile accessibility, including hardware guidelines and rethinking accessible software design.

Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users
Description

Visual semantics provide spatial information such as size, shape, and position, which is necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances and, if so, for which task scenarios. In this work, through semi-structured and task-based interviews, we explore BLV technology users' preferences, interest levels, and use of visual semantics across two device platforms (smartphones and laptops) and across information-seeking tasks and interactions common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features in existing screen-reading technology such as touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with the information needed to engage with visual semantics.

From Tactile to NavTile: Opportunities and Challenges for Multi-Modal Feedback in Guiding Surfaces during Non-Visual Navigation
Description

Tactile guiding surfaces in the built environment have held a contentious place in navigation by people who are blind or visually impaired. Despite standards for tactile guiding surfaces, problems persist with inconsistent implementation, perception, and geographic orientation. We investigate the role of tactile cues in non-visual navigation and attitudes toward guiding surfaces through a survey of 67 people with vision impairments and ten interviews with navigation and public accessibility experts. Our participants revealed several opportunities to augment existing tactile surfaces while envisioning novel multimodal feedback solutions in immediately relevant contexts. We also propose an approach for designing and exploring low-cost, multimodal tactile surfaces, which we call navtiles. Finally, we discuss practical aspects of implementation for new design alternatives, such as standardization, installation, movability, discoverability, and the need for transparency. Collectively, these insights contribute to the production and implementation of novel multimodal navigation aids.

Voicemoji: Emoji Entry Using Voice for Visually Impaired People
Description

Keyboard-based emoji entry can be challenging for people with visual impairments: users have to sequentially navigate emoji lists using screen readers to find their desired emojis, which is a slow and tedious process. In this work, we explore the design and benefits of emoji entry with speech input, a popular text entry method among people with visual impairments. After conducting interviews to understand blind or low vision (BLV) users’ current emoji input experiences, we developed Voicemoji, which (1) outputs relevant emojis in response to voice commands, and (2) provides context-sensitive emoji suggestions through speech output. We also conducted a multi-stage evaluation study with six BLV participants from the United States and six BLV participants from China, finding that Voicemoji significantly reduced entry time by 91.2% and was preferred by all participants over the Apple iOS keyboard. Based on our findings, we present Voicemoji as a feasible solution for voice-based emoji entry.
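
To make the voice-command flow concrete, the toy sketch below matches a spoken query against keyword tags and reads the candidates back for selection. The command phrasing, tag set, and speak() stub are hypothetical; they illustrate the interaction pattern described in the abstract, not Voicemoji's implementation.

```python
# Toy keyword lookup standing in for voice-driven emoji search; everything
# here (tags, phrasing, speak stub) is an assumption for illustration.
EMOJI_TAGS = {
    "😀": {"happy", "smile", "face"},
    "😂": {"laugh", "tears", "joy"},
    "🎂": {"birthday", "cake"},
    "🐱": {"cat", "kitten"},
}

def suggest_emojis(command: str, max_results: int = 3):
    """Rank emojis by how many words of the spoken query match their tags."""
    words = set(command.lower().replace("emoji", "").split())
    scored = [(len(words & tags), emoji) for emoji, tags in EMOJI_TAGS.items()]
    return [e for score, e in sorted(scored, reverse=True) if score > 0][:max_results]

def speak(text: str):
    print(text)  # placeholder for a screen reader / text-to-speech announcement

candidates = suggest_emojis("emoji happy face")
speak("Found " + ", ".join(candidates) + ". Say a number to insert.")
```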
