56. Assistive Interactions: Everyday Interactions for Users Who are Blind or Low Vision

Help Supporters: Exploring the Design Space of Assistive Technologies to Support Face-to-Face Help Between Blind and Sighted Strangers
Description

Blind and low-vision (BLV) people face many challenges when venturing into public environments, often wishing it were easier to get help from people nearby.

Ironically, while many sighted individuals are willing to help, such interactions are infrequent. Asking for help is socially awkward for BLV people, and sighted people lack experience in helping BLV people. Through a mixed-ability research-through-design process, we explore four diverse approaches toward how assistive technology can serve as help supporters that collaborate with both BLV and sighted parties throughout the help process. These approaches span two phases: the connection phase (finding someone to help) and the collaboration phase (facilitating help after finding someone). Our findings from a 20-participant mixed-ability study reveal how help supporters can best facilitate connection, which types of information they should present during both phases, and more. We discuss design implications for future approaches to support face-to-face help.

Visual Cues for Data Analysis Features Amplify Challenges for Blind Spreadsheet Users
Description

Spreadsheets are widely used for storing, manipulating, analyzing, and visualizing data. Features such as conditional formatting, formulas, sorting, and filtering play an important role in understanding and analyzing data in spreadsheets. These features rely on visual cues, yet we have little understanding of how blind screen reader (SR) users experience them. We conducted a study with 12 blind SR users to gain insights into their challenges, workarounds, and strategies for understanding and extracting information from a spreadsheet consisting of multiple tables that incorporated data analysis features. We identified five factors that impact blind SR users' experiences: cognitive overload, time-information trade-off, lack of awareness and expertise, inadequate system feedback, and delayed or absent SR responses. Drawing on these findings, we discuss design suggestions and a future research agenda to improve SR users' spreadsheet experiences.

A Contextual Inquiry of People with Vision Impairments in Cooking
Description

Individuals with vision impairments employ a variety of strategies to identify objects, such as pans or soy sauce, while cooking. In addition, they often rely on contextual details about objects, such as location, orientation, and current status, to carry out cooking activities independently. To understand how people with vision impairments collect and use the contextual information of objects while cooking, we conducted a contextual inquiry study with 12 participants in their own kitchens. This research analyzes how cooks interact with objects in culinary practice to inform the design of assistive vision technologies for visually impaired cooks. We outline eight different types of contextual information and the strategies that blind cooks currently use to access that information while preparing meals. Further, we discuss preferences for communicating contextual information about kitchen objects as well as considerations for the deployment of AI-powered assistive technologies.

Towards Inclusive Source Code Readability Based on the Preferences of Programmers with Visual Impairments
Description

Code readability is crucial for program comprehension, maintenance, and collaboration. However, many of the standards for writing readable code are derived from sighted developers' readability needs. We conducted a qualitative study with 16 blind and visually impaired (BVI) developers to better understand their readability preferences for common code formatting rules such as identifier naming conventions, line length, and the use of indentation. Our findings reveal how BVI developers' preferences contrast with those of sighted developers and how we can expand the existing rules to improve code readability on screen readers. Based on the findings, we contribute an inclusive understanding of code readability and derive implications for programming languages, development environments, and style guides. Our work helps broaden the meaning of readable code in software engineering and accessibility research.

FetchAid: Making Parcel Lockers More Accessible to Blind and Low Vision People With Deep-learning Enhanced Touchscreen Guidance, Error-Recovery Mechanism, and AR-based Search Support
Description

Parcel lockers have become an increasingly prevalent last-mile delivery method. However, a recent study revealed the accessibility challenges they pose to blind and low-vision (BLV) people. Informed by that study, we designed FetchAid, a standalone intelligent mobile app that assists BLV people in using a parcel locker in real time by integrating computer vision and augmented reality (AR) technologies. FetchAid first uses a deep network to detect the user's fingertip and the relevant buttons on the parcel locker's touch screen, guiding the user to reveal and scan the QR code that opens the target compartment door; it then interactively guides the user to reach the door safely with AR-based, context-aware audio feedback. Moreover, FetchAid provides an error-recovery mechanism and real-time feedback to keep the user on track. We evaluated FetchAid with 12 BLV people and found that it substantially improved task accomplishment and efficiency, and reduced frustration and overall effort.
