Access for People with Visual Impairment

[A] Paper Room 01, 2021-05-12 17:00:00~2021-05-12 19:00:00 / [B] Paper Room 01, 2021-05-13 01:00:00~2021-05-13 03:00:00 / [C] Paper Room 01, 2021-05-13 09:00:00~2021-05-13 11:00:00

Conference Name
CHI 2021
Tactile Fixations: A Behavioral Marker on How People with Visual Impairments Explore Raised-line Graphics
Abstract

Raised-line graphics are tactile documents made for people with visual impairments (VI). Their exploration relies on complex two-handed behavior. To better understand the cognitive processes underlying this exploration, we propose a new method based on “tactile fixations”. A tactile fixation occurs when a finger remains stationary within a specific spatial and temporal window. Stationary fingers are known to play an active role when exploring tactile graphics, but these stationary periods have never been formally defined or studied. In this study, we first defined the concept of tactile fixation, then conducted a behavioral study with ten participants with VI to assess the role of tactile fixations under different conditions. The results show that tactile fixations vary with factors such as the graphic type, the hand involved, and the aim of the exploration.
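
The spatial-and-temporal-window definition above is closely analogous to the dispersion-threshold (I-DT) algorithm used to detect fixations in eye tracking. The following sketch applies that idea to a single finger's position trace; the 15 mm and 200 ms thresholds are illustrative assumptions, not values reported in the paper.

```python
# A minimal sketch of tactile-fixation detection on one finger's trace,
# modeled after the dispersion-threshold (I-DT) algorithm from eye tracking.
# The 15 mm / 200 ms thresholds are illustrative assumptions.

from typing import List, Tuple

Sample = Tuple[float, float, float]  # (timestamp_s, x_mm, y_mm)

def detect_tactile_fixations(
    trace: List[Sample],
    max_dispersion_mm: float = 15.0,  # assumed spatial window
    min_duration_s: float = 0.2,      # assumed temporal window
) -> List[Tuple[float, float]]:
    """Return (start, end) timestamps of fixations found in the trace."""
    fixations = []
    i = 0
    while i < len(trace):
        j = i
        # Grow the window while the finger stays inside the spatial bound.
        while j + 1 < len(trace):
            xs = [s[1] for s in trace[i:j + 2]]
            ys = [s[2] for s in trace[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_mm:
                break
            j += 1
        # Keep the window only if it also satisfies the temporal bound.
        if trace[j][0] - trace[i][0] >= min_duration_s:
            fixations.append((trace[i][0], trace[j][0]))
            i = j + 1
        else:
            i += 1
    return fixations
```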

Authors
Kaixing Zhao
University of Toulouse, Toulouse, France
Sandra Bardot
University of Manitoba, Winnipeg, Manitoba, Canada
Marcos Serrano
IRIT - Elipse, Toulouse, France
Mathieu Simonnet
IMT Atlantique, Brest, France
Bernard Oriola
CNRS / UPS, Toulouse, France
Christophe Jouffrais
CNRS, Singapore, Singapore
DOI

10.1145/3411764.3445578

Paper URL

https://doi.org/10.1145/3411764.3445578

Video
Tactile Compass: Enabling Visually Impaired People to Follow a Path with Continuous Directional Feedback
Abstract

Accurate and effective directional feedback is crucial for an electronic travel aid that guides visually impaired people along paths. This paper presents Tactile Compass, a hand-held device that provides continuous directional feedback with a rotatable needle pointing toward the planned direction. We conducted two lab studies to evaluate the effectiveness of the feedback solution. Results showed that using Tactile Compass, participants could reach the target direction in place with a mean deviation of 3.03° and could smoothly navigate along paths of 60 cm width, with a mean deviation from the centerline of 12.1 cm. Subjective feedback showed that Tactile Compass was easy to learn and use.
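
A needle that keeps pointing along the planned direction has to compensate for the device's own orientation. The sketch below shows the small piece of geometry involved; the function name and angle conventions are assumptions, since the abstract does not describe the firmware.

```python
# A minimal sketch of the geometry behind a direction-pointing needle:
# rotate it so it indicates the planned direction regardless of how the
# device is oriented. Names and angle conventions are assumptions.

def needle_angle_deg(device_heading_deg: float, target_bearing_deg: float) -> float:
    """Needle rotation relative to the device's forward axis.

    Inputs are compass bearings in degrees (0 = north, clockwise).
    The result is normalized to (-180, 180] so the needle always takes
    the shorter way around.
    """
    delta = (target_bearing_deg - device_heading_deg) % 360.0
    if delta > 180.0:
        delta -= 360.0
    return delta

# Example: the user holds the device facing east (90°) while the planned
# path heads 75°; the needle rotates 15° counter-clockwise.
print(needle_angle_deg(90.0, 75.0))  # -15.0
```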

Authors
Guanhong Liu
Tsinghua University, Beijing, China
Tianyu Yu
Tsinghua University, Beijing, China
Chun Yu
Tsinghua University, Beijing, China
Haiqing Xu
Tsinghua University, Beijing, China
Shuchang Xu
Tsinghua University, Beijing, China
Ciyuan Yang
Tsinghua University, Beijing, China
Feng Wang
Tsinghua University, Beijing, China
Haipeng Mi
Tsinghua University, Beijing, China
Yuanchun Shi
Tsinghua University, Beijing, China
DOI

10.1145/3411764.3445644

Paper URL

https://doi.org/10.1145/3411764.3445644

Video
ThroughHand: 2D Tactile Interaction to Simultaneously Recognize and Touch Multiple Objects
Abstract

Users with visual impairments find it difficult to enjoy real-time 2D interactive applications on the touchscreen. Touchscreen applications such as sports games often require simultaneous recognition of and interaction with multiple moving targets through vision. To mitigate this issue, we propose ThroughHand, a novel tactile interaction that enables users with visual impairments to interact with multiple dynamic objects in real time. We designed the ThroughHand interaction to utilize the potential of the human tactile sense that spatially registers both sides of the hand with respect to each other. ThroughHand allows interaction with multiple objects by enabling users to perceive the objects using the palm while providing a touch input space on the back of the same hand. A user study verified that ThroughHand enables users to locate stimuli on the palm with a margin of error of approximately 13 mm and effectively provides a real-time 2D interaction experience for users with visual impairments.
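
ThroughHand depends on registering a touch point on the back of the hand to the corresponding location on the palm-side display. The sketch below expresses one plausible form of that correspondence; the grid resolution and mirroring convention are illustrative assumptions, not the paper's hardware specification.

```python
# A minimal sketch of the back-of-hand -> palm registration ThroughHand
# relies on. The 12x12 grid resolution and the x-mirroring convention are
# illustrative assumptions, not the paper's hardware specification.

PALM_GRID_W, PALM_GRID_H = 12, 12  # assumed resolution of the palm display

def back_to_palm(x_norm: float, y_norm: float) -> tuple[int, int]:
    """Map a normalized touch point (0..1) on the back of the hand to a
    palm-grid cell. The x-axis is mirrored because the two sides of the
    hand face in opposite directions."""
    col = min(int((1.0 - x_norm) * PALM_GRID_W), PALM_GRID_W - 1)
    row = min(int(y_norm * PALM_GRID_H), PALM_GRID_H - 1)
    return col, row

# Example: a touch near one edge of the back of the hand lands on the
# opposite edge of the palm grid.
print(back_to_palm(0.95, 0.5))  # (0, 6)
```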

Award
Honorable Mention
Authors
Jingun Jung
KAIST, Daejeon, Korea, Republic of
Sunmin Son
School of Computing, KAIST, Daejeon, Korea, Republic of
Sangyoon Lee
KAIST, Daejeon, Korea, Republic of
Yeonsu Kim
KAIST, Daejeon, Korea, Republic of
Geehyuk Lee
School of Computing, KAIST, Daejeon, Korea, Republic of
DOI

10.1145/3411764.3445530

Paper URL

https://doi.org/10.1145/3411764.3445530

Video
Exploring Technology Design for Students with Vision Impairment in the Classroom and Remotely
Abstract

Teachers of the Visually Impaired (TVIs) teach academic and functional living skills simultaneously to prepare students with vision impairment to be successful and independent. Current educational tools primarily focus on academic instruction rather than the multifaceted approach these students need. Our work aims to understand how technology can integrate behavioral skills, like independence, and support TVIs in their preferred teaching strategy. We observed elementary classrooms at a school for the blind for six weeks to study how educators design lessons and use technology to supplement their instruction in different subjects. After the observational study, we conducted remote interviews with educators to understand how technology can support students in building academic and behavioral skills both in person and remotely. Educators suggested incorporating audio feedback that motivates students to play and learn consistently, tracking student progress for parents and educators, and designing features that help students build independence and develop collaborative skills.

Authors
Vinitha Gadiraju
University of Colorado Boulder, Boulder, Colorado, United States
Olwyn Doyle
University of Colorado Boulder, Boulder, Colorado, United States
Shaun Kane
University of Colorado Boulder, Boulder, Colorado, United States
DOI

10.1145/3411764.3445755

Paper URL

https://doi.org/10.1145/3411764.3445755

Video
Community Based Robot Design for Classrooms with Mixed Visual Abilities Children
Abstract

Visually impaired (VI) children face challenges in collaborative learning in classrooms. Robots have the potential to support inclusive classroom experiences by leveraging their physicality, bespoke social behaviors, sensors, and multimodal feedback. However, the design of social robots for mixed-visual-abilities classrooms remains mostly unexplored. This paper presents a four-month-long community-based design process in which we engaged with a school community. We provide insights into the barriers experienced by children and how social robots can address them. We also report on a participatory design activity with mixed-visual-abilities children, highlighting the expected roles, attitudes, and physical characteristics of robots. Findings contextualize social robots within inclusive classroom settings as a holistic solution that can interact anywhere when needed, and suggest a broader view of inclusion that extends beyond disability to children's personality traits, technology access, and mastery of school subjects. We finish by providing reflections on the community-based design process.

Authors
Isabel Neto
University of Lisbon, Lisbon, Portugal
Hugo Nicolau
Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
Ana Paiva
University of Lisbon, Lisbon, Portugal
DOI

10.1145/3411764.3445135

Paper URL

https://doi.org/10.1145/3411764.3445135

Video
LightWrite: Teach Handwriting to The Visually Impaired with A Smartphone
Abstract

Learning to write is challenging for blind and low vision (BLV) people because of the lack of visual feedback. Despite drastic advances in digital technology, handwriting is still an essential part of daily life. Although tools designed to teach BLV people to write exist, many are expensive and require the help of sighted teachers. We propose LightWrite, a low-cost, easy-to-access smartphone application that uses voice-based descriptive instruction and feedback to teach BLV users to write English lowercase letters and Arabic digits in a specifically designed font. A two-stage study with 15 BLV users with little prior writing knowledge shows that LightWrite can successfully teach users to write characters in an average of 1.09 minutes per letter. After initial training and 20-minute daily practice for 5 days, participants were able to write an average of 19.9 out of 26 letters recognizable by sighted raters.
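
The teaching loop rests on instructions that describe strokes verbally rather than visually. The sketch below shows what such a lookup might look like; the wording, the letters covered, and the font conventions are illustrative assumptions, not LightWrite's actual instruction set.

```python
# A minimal sketch of a voice-first stroke-instruction lookup. The wording,
# the letters covered, and the font are illustrative assumptions, not
# LightWrite's actual instruction set.

STROKE_INSTRUCTIONS = {
    "l": ["Draw one straight line from top to bottom."],
    "o": ["Start at the top and draw a full circle counter-clockwise, "
          "returning to where you started."],
    "t": ["Draw a straight line from top to bottom.",
          "Lift your finger, then draw a short horizontal line across "
          "the upper part of the first stroke."],
}

def speak_lesson(letter: str, tts=print) -> None:
    """Play one character's instruction sequence through a TTS callback."""
    for step, text in enumerate(STROKE_INSTRUCTIONS[letter], start=1):
        tts(f"Step {step}: {text}")

speak_lesson("t")
```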

Authors
Zihan Wu
Tsinghua University, Beijing, China
Chun Yu
Tsinghua University, Beijing, China
Xuhai Xu
University of Washington, Seattle, Washington, United States
Tong Wei
Tsinghua University, Beijing, China
Tianyuan Zou
Tsinghua University, Beijing, China
Ruolin Wang
UCLA, Los Angeles, California, United States
Yuanchun Shi
Tsinghua University, Beijing, China
DOI

10.1145/3411764.3445322

Paper URL

https://doi.org/10.1145/3411764.3445322

Video
LineChaser: A Smartphone-Based Navigation System for Blind People to Stand in Line
Abstract

Standing in line is one of the most common social behaviors in public spaces but can be challenging for blind people. We propose an assistive system named LineChaser, which navigates a blind user to the end of a line and continuously reports the distance and direction to the last person in the line so that they can be followed. LineChaser uses the RGB camera in a smartphone to detect nearby pedestrians, and the built-in infrared depth sensor to estimate their positions. Based on these position estimates, LineChaser determines whether nearby pedestrians are standing in line, and uses audio and vibration signals to notify the user when they should start or stop moving forward. In this way, users can stay correctly positioned while maintaining social distance. We conducted a usability study with 12 blind participants. LineChaser allowed blind participants to successfully navigate lines, significantly increasing their confidence in standing in line.
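
Once pedestrians have been detected and positioned, the guidance step reduces to reporting distance and direction to the last person and deciding when to move or stop. The sketch below illustrates that step under stated assumptions; LineChaser's actual pipeline, including its line-membership classification, is more involved.

```python
# A minimal sketch of the per-frame guidance step, given pedestrian
# positions already classified as line members. The stop threshold and
# the nearest-member assumption are illustrative, not from the paper.

import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x_m, y_m): x to the user's right, y forward

def guidance_step(line_members: List[Point],
                  stop_distance_m: float = 1.5) -> Tuple[float, float, str]:
    """Report distance and direction to the last person in the line,
    plus a move/stop decision for the audio and vibration channel."""
    # Assumption: the line member nearest to the user is the last person
    # in the line (the user is approaching or standing at its end).
    tail = min(line_members, key=lambda p: math.hypot(p[0], p[1]))
    distance = math.hypot(tail[0], tail[1])
    bearing = math.degrees(math.atan2(tail[0], tail[1]))  # 0° = straight ahead
    action = "move" if distance > stop_distance_m else "stop"
    return distance, bearing, action

# Example: the last person stands 2 m ahead, slightly to the right.
print(guidance_step([(0.3, 2.0), (0.5, 4.0)]))  # move toward ~8.5° right
```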

Authors
Masaki Kuribayashi
Waseda University, Tokyo, Japan
Seita Kayukawa
Waseda University, Tokyo, Japan
Hironobu Takagi
IBM Research - Tokyo, Tokyo, Japan
Chieko Asakawa
IBM, Yorktown Heights, New York, United States
Shigeo Morishima
Waseda Research Institute for Science and Engineering, Tokyo, Japan
DOI

10.1145/3411764.3445451

Paper URL

https://doi.org/10.1145/3411764.3445451

Video
Smartphone Usage by Expert Blind Users
Abstract

People with vision impairments access smartphones with the help of screen reader apps such as TalkBack for Android and VoiceOver for iPhone. Prior research has mostly focused on understanding touchscreen phone adoption and the typing performance of novice blind users by logging their real-world smartphone usage. Understanding the usage patterns and practices of expert users can help in developing tools and tutorials for transitioning novice and intermediate users to expert users. In this work, we logged smartphone usage data of eight expert Android smartphone users with visual impairments for four weeks, and then interviewed them. This paper presents a detailed analysis that uncovered novel usage patterns, such as extensive use of directional gestures, reliance on voice and external keyboards for text input, and explore-by-touch repurposed for single-tap. We conclude with design recommendations to inform the future of mobile accessibility, including hardware guidelines and rethinking accessible software design.
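
A log of screen-reader events supports simple aggregations that surface patterns like heavy directional-gesture use. The sketch below shows one such aggregation over a hypothetical log format; the paper's logger and schema are not described in the abstract.

```python
# A minimal sketch of aggregating screen-reader gesture events by type.
# The tab-separated log format here is an illustrative assumption.

from collections import Counter

# Hypothetical log lines: "<timestamp>\t<event_type>"
SAMPLE_LOG = """\
2021-03-01T09:15:02\tswipe_right
2021-03-01T09:15:04\tswipe_right
2021-03-01T09:15:07\texplore_by_touch
2021-03-01T09:15:09\tswipe_left
"""

def gesture_histogram(log_text: str) -> Counter:
    """Count gesture events by type from a tab-separated log."""
    events = (line.split("\t")[1] for line in log_text.splitlines() if line)
    return Counter(events)

print(gesture_histogram(SAMPLE_LOG).most_common())
```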

Authors
Mohit Jain
Microsoft Research, Bangalore, Karnataka, India
Nirmalendu Diwakar
Microsoft Research, Bangalore, Karnataka, India
Manohar Swaminathan
Microsoft Research, Bangalore, Karnataka, India
DOI

10.1145/3411764.3445074

Paper URL

https://doi.org/10.1145/3411764.3445074

Video
Examining Visual Semantic Understanding in Blind and Low-Vision Technology Users
Abstract

Visual semantics provide spatial information like size, shape, and position, which is necessary to understand and efficiently use interfaces and documents. Yet little is known about whether blind and low-vision (BLV) technology users want to interact with visual affordances, and, if so, in which task scenarios. In this work, through semi-structured and task-based interviews, we explore preferences, interest levels, and use of visual semantics among BLV technology users across two device platforms (smartphones and laptops) and across information-seeking and interaction tasks common in apps and web browsing. Findings show that participants could benefit from access to visual semantics for collaboration, navigation, and design. To learn this information, our participants used trial and error, sighted assistance, and features of existing screen-reading technology such as touch exploration. Finally, we found that missing information and inconsistent screen reader representations of user interfaces hinder learning. We discuss potential applications and future work to equip BLV users with the information necessary to engage with visual semantics.

Authors
Venkatesh Potluri
University of Washington, Seattle, Washington, United States
Tadashi E. Grindeland
University of Washington, Seattle, Washington, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Jennifer Mankoff
University of Washington, Seattle, Washington, United States
DOI

10.1145/3411764.3445040

Paper URL

https://doi.org/10.1145/3411764.3445040

Video
From Tactile to NavTile: Opportunities and Challenges for Multi-Modal Feedback in Guiding Surfaces during Non-Visual Navigation
Abstract

Tactile guiding surfaces in the built environment have held a contentious place in the navigation practices of people who are blind or visually impaired. Despite standards for tactile guiding surfaces, problems persist with inconsistent implementation, perception, and geographic orientation. We investigate the role of tactile cues in non-visual navigation and attitudes surrounding guiding surfaces through a survey of 67 people with vision impairments and ten interviews with navigation and public accessibility experts. Our participants revealed several opportunities to augment existing tactile surfaces while envisioning novel multimodal feedback solutions in immediately relevant contexts. We also propose an approach for designing and exploring low-cost, multimodal tactile surfaces, which we call navtiles. Finally, we discuss practical aspects of implementing new design alternatives, such as standardization, installation, movability, discoverability, and a need for transparency. Collectively, these insights contribute to the production and implementation of novel multimodal navigation aids.

Authors
Sai Ganesh Swaminathan
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Yellina Yim
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Scott E. Hudson
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Cynthia L. Bennett
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Patrick Carrington
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445716

Paper URL

https://doi.org/10.1145/3411764.3445716

Video
Voicemoji: Emoji Entry Using Voice for Visually Impaired People
Abstract

Keyboard-based emoji entry can be challenging for people with visual impairments: users have to sequentially navigate emoji lists using screen readers to find their desired emojis, which is a slow and tedious process. In this work, we explore the design and benefits of emoji entry with speech input, a popular text entry method among people with visual impairments. After conducting interviews to understand blind or low vision (BLV) users’ current emoji input experiences, we developed Voicemoji, which (1) outputs relevant emojis in response to voice commands, and (2) provides context-sensitive emoji suggestions through speech output. We also conducted a multi-stage evaluation study with six BLV participants from the United States and six BLV participants from China, finding that Voicemoji significantly reduced entry time by 91.2% and was preferred by all participants over the Apple iOS keyboard. Based on our findings, we present Voicemoji as a feasible solution for voice-based emoji entry.
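
The interaction pattern is a spoken trigger word followed by a free-form query that is matched against an emoji lexicon. The sketch below illustrates that pattern; the trigger word, the tiny lexicon, and the function name are assumptions for illustration, not Voicemoji's implementation.

```python
# A minimal sketch of trigger-word parsing plus lexicon lookup for
# voice-driven emoji entry. All names and data here are illustrative.

EMOJI_LEXICON = {
    "happy": ["😀", "😊", "😄"],
    "sad": ["😢", "😞"],
    "heart": ["❤️", "💕"],
}

def handle_utterance(utterance: str, max_results: int = 3) -> list[str]:
    """Return candidate emojis if the utterance is an emoji command."""
    words = utterance.lower().split()
    if not words or words[0] != "emoji":  # assumed trigger word
        return []
    query = " ".join(words[1:])
    hits = [e for keyword, emojis in EMOJI_LEXICON.items()
            if keyword in query for e in emojis]
    return hits[:max_results]

print(handle_utterance("emoji happy"))  # ['😀', '😊', '😄']
```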

Authors
Mingrui Ray Zhang
University of Washington, Seattle, Washington, United States
Ruolin Wang
UCLA, Los Angeles, California, United States
Xuhai Xu
University of Washington, Seattle, Washington, United States
Qisheng Li
University of Washington, Seattle, Washington, United States
Ather Sharif
University of Washington, Seattle, Washington, United States
Jacob O. Wobbrock
University of Washington, Seattle, Washington, United States
DOI

10.1145/3411764.3445338

Paper URL

https://doi.org/10.1145/3411764.3445338

Video