Assistive Interactions: Everyday Interactions for Users Who are Blind or Low Vision

Conference Name
CHI 2024
Help Supporters: Exploring the Design Space of Assistive Technologies to Support Face-to-Face Help Between Blind and Sighted Strangers
Abstract

Blind and low-vision (BLV) people face many challenges when venturing into public environments, often wishing it were easier to get help from people nearby. Ironically, while many sighted individuals are willing to help, such interactions are infrequent. Asking for help is socially awkward for BLV people, and sighted people lack experience in helping BLV people. Through a mixed-ability research-through-design process, we explore four diverse approaches toward how assistive technology can serve as help supporters that collaborate with both BLV and sighted parties throughout the help process. These approaches span two phases: the connection phase (finding someone to help) and the collaboration phase (facilitating help after finding someone). Our findings from a 20-participant mixed-ability study reveal how help supporters can best facilitate connection, which types of information they should present during both phases, and more. We discuss design implications for future approaches to support face-to-face help.

Authors
Yuanyang Teng
Columbia University, New York, New York, United States
Connor Courtien
Hunter College, New York, New York, United States
David Angel Rios
Columbia University, New York, New York, United States
Yves M. Tseng
Columbia University, New York, New York, United States
Jacqueline Gibson
Columbia University, New York, New York, United States
Maryam Aziz
Duke University, Durham, North Carolina, United States
Avery Reyna
University of Central Florida, Orlando, Florida, United States
Rajan Vaish
Easel AI, Inc., Los Angeles, California, United States
Brian A. Smith
Columbia University, New York, New York, United States
Paper URL

doi.org/10.1145/3613904.3642816

Video
Visual Cues for Data Analysis Features Amplify Challenges for Blind Spreadsheet Users
Abstract

Spreadsheets are widely used for storing, manipulating, analyzing, and visualizing data. Features such as conditional formatting, formulas, sorting, and filtering play an important role when understanding and analyzing data in spreadsheets. They employ visual cues, but we have little understanding of the experiences of blind screen reader (SR) users with such features. We conducted a study with 12 blind SR users to gain insights into their challenges, workarounds, and strategies in understanding and extracting information from a spreadsheet consisting of multiple tables that incorporated data analysis features. We identified five factors that impact blind SR users' experiences: cognitive overload, time-information trade-off, lack of awareness and expertise, inadequate system feedback, and delayed and absent SR responses. Drawing on these findings, we discuss design suggestions and a future research agenda to improve SR users' spreadsheet experiences.

Authors
Minoli Perera
Monash University, Clayton, Victoria, Australia
Bongshin Lee
Microsoft Research, Redmond, Washington, United States
Eun Kyoung Choe
University of Maryland, College Park, Maryland, United States
Kim Marriott
Monash University, Melbourne, Australia
Paper URL

doi.org/10.1145/3613904.3642753

Video
A Contextual Inquiry of People with Vision Impairments in Cooking
Abstract

Individuals with vision impairments employ a variety of strategies to identify objects, such as pans or soy sauce, during the culinary process. In addition, they often rely on contextual details about objects, such as location, orientation, and current status, to autonomously execute cooking activities. To understand how people with vision impairments collect and use the contextual information of objects while cooking, we conducted a contextual inquiry study with 12 participants in their own kitchens. This research aims to analyze object-interaction dynamics in culinary practices to enhance assistive vision technologies for visually impaired cooks. We outline eight different types of contextual information and the strategies that blind cooks currently use to access the information while preparing meals. Further, we discuss preferences for communicating contextual information about kitchen objects as well as considerations for the deployment of AI-powered assistive technologies.
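
To make the idea of object context concrete, here is a minimal Python sketch, entirely hypothetical and not drawn from the paper: a structure an AI-powered kitchen assistant might use to hold and verbalize the three context types the abstract names (location, orientation, and current status). The class and field names are our own illustration, not the paper's taxonomy.

```python
# Hypothetical sketch, not the paper's taxonomy: a structure an
# AI-powered kitchen assistant might use to hold and verbalize the
# object context the abstract mentions (location, orientation, status).

from dataclasses import dataclass
from typing import Optional

@dataclass
class KitchenObjectContext:
    name: str                          # e.g., "soy sauce bottle"
    location: str                      # e.g., "second shelf, left of the stove"
    orientation: Optional[str] = None  # e.g., "label facing forward"
    status: Optional[str] = None       # e.g., "about half full"

    def describe(self) -> str:
        """Compose a short spoken description for audio output."""
        parts = [f"{self.name}: {self.location}"]
        if self.orientation:
            parts.append(self.orientation)
        if self.status:
            parts.append(self.status)
        return ", ".join(parts)

ctx = KitchenObjectContext("soy sauce bottle", "second shelf, left of the stove",
                           orientation="label facing forward",
                           status="about half full")
print(ctx.describe())
# soy sauce bottle: second shelf, left of the stove, label facing forward, about half full
```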

Authors
Franklin Mingzhe Li
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Michael Xieyang Liu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Shaun K. Kane
Google Research, Boulder, Colorado, United States
Patrick Carrington
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642233

Video
Towards Inclusive Source Code Readability Based on the Preferences of Programmers with Visual Impairments
Abstract

Code readability is crucial for program comprehension, maintenance, and collaboration. However, many of the standards for writing readable code are derived from sighted developers' readability needs. We conducted a qualitative study with 16 blind and visually impaired (BVI) developers to better understand their readability preferences for common code formatting rules such as identifier naming conventions, line length, and the use of indentation. Our findings reveal how BVI developers' preferences contrast with those of sighted developers and how we can expand the existing rules to improve code readability on screen readers. Based on the findings, we contribute an inclusive understanding of code readability and derive implications for programming languages, development environments, and style guides. Our work helps broaden the meaning of readable code in software engineering and accessibility research.
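
To make the formatting trade-offs concrete, here is a hypothetical Python snippet, ours rather than the paper's: the same call wrapped to meet a conventional line-length limit versus written on one longer line. Many screen readers read code line by line and can announce leading indentation, which is what makes such rules worth re-examining for BVI developers.

```python
# Hypothetical snippet (ours, not the paper's): the same function call
# formatted two ways, illustrating the line-length and indentation
# trade-offs the study examines for screen reader users.

def compute_statistics(data, normalize=False, drop_missing=False):
    """Toy stand-in so the snippet runs; the formatting is the point."""
    values = [v for v in data if v is not None] if drop_missing else list(data)
    total = sum(values)
    return [v / total for v in values] if normalize and total else values

dataset = [3, 1, None, 4]

# (a) Wrapped to satisfy a typical style guide's line-length limit: a
# screen reader user traverses four lines, each potentially prefixed by
# an indentation announcement, to hear one call.
result_wrapped = compute_statistics(
    dataset,
    normalize=True,
    drop_missing=True,
)

# (b) The same call on one longer line: a single "read current line"
# command speaks the entire statement at once.
result_single = compute_statistics(dataset, normalize=True, drop_missing=True)

assert result_wrapped == result_single
```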

Authors
Maulishree Pandey
University of Michigan, Ann Arbor, Michigan, United States
Steve Oney
University of Michigan, Ann Arbor, Michigan, United States
Andrew Begel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Paper URL

doi.org/10.1145/3613904.3642512

Video
FetchAid: Making Parcel Lockers More Accessible to Blind and Low Vision People With Deep-learning Enhanced Touchscreen Guidance, Error-Recovery Mechanism, and AR-based Search Support
Abstract

Parcel lockers have become an increasingly prevalent last-mile delivery method. However, a recent study revealed the accessibility challenges they pose to blind and low-vision (BLV) people. Informed by that study, we designed FetchAid, a standalone intelligent mobile app that assists BLV people in using a parcel locker in real time by integrating computer vision and augmented reality (AR) technologies. FetchAid first uses a deep network to detect the user's fingertip and the relevant buttons on the parcel locker's touchscreen, guiding the user to reveal and scan the QR code that opens the target compartment door; it then interactively guides the user to reach the door safely with AR-based, context-aware audio feedback. Moreover, FetchAid provides an error-recovery mechanism and real-time feedback to keep the user on track. In an evaluation with 12 BLV people, FetchAid substantially improved task accomplishment and efficiency, and reduced frustration and overall effort.
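
The abstract outlines FetchAid's pipeline without implementation detail. Below is a minimal Python sketch, assuming per-frame fingertip and button detection that drives spoken direction cues, with an error-recovery branch when tracking is lost; detect_fingertip, detect_target_button, speak, and camera are hypothetical stand-ins, not the authors' API.

```python
# Minimal sketch, not the authors' code: the guidance loop implied by
# the abstract. The detector and speech functions are hypothetical
# stand-ins for FetchAid's deep-network and audio components.

import math
import time

def guide_to_button(camera, detect_fingertip, detect_target_button, speak,
                    tolerance_px=30, lost_limit=10):
    """Speak direction cues until the fingertip rests on the target button."""
    lost_frames = 0
    while True:
        frame = camera.read()
        finger = detect_fingertip(frame)      # (x, y) in pixels, or None
        target = detect_target_button(frame)  # (x, y) in pixels, or None

        if finger is None or target is None:
            lost_frames += 1
            if lost_frames >= lost_limit:  # error recovery: re-engage the user
                speak("I lost tracking. Please hold the phone steady and "
                      "move your hand back in front of the screen.")
                lost_frames = 0
            continue

        lost_frames = 0
        dx, dy = target[0] - finger[0], target[1] - finger[1]
        if math.hypot(dx, dy) <= tolerance_px:
            speak("You are on the button. Press now.")
            return

        # Speak only the dominant direction so cues stay short and unambiguous.
        if abs(dx) >= abs(dy):
            speak("Move right." if dx > 0 else "Move left.")
        else:
            speak("Move down." if dy > 0 else "Move up.")
        time.sleep(0.5)  # throttle cues so speech stays intelligible
```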

Award
Honorable Mention
Authors
Zhitong Guan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Zeyu Xiong
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Mingming Fan
The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Paper URL

doi.org/10.1145/3613904.3642213

Video