Storytelling to Sensemaking: A Systematic Framework for Designing Auditory Description Display for Interactives
Description

Auditory description display is verbalized text typically used to describe live, recorded, or graphical displays to support access for people who are blind or visually impaired. Significant prior research has resulted in guidelines for auditory description for non-interactive or minimally interactive contexts. A lack of auditory description for complex interactive environments remains a tremendous barrier to access for people with visual impairments. In this work, we present a systematic design framework for designing auditory description within complex interactive environments. We illustrate how modular descriptions aligned with this framework can result in an interactive storytelling experience constructed through user interactions. This framework has been used in a set of published and widely used interactive science simulations, and in its generalized form could be applied to a variety of contexts.

"Person, Shoes, Tree. Is the Person Naked?" What People with Vision Impairments Want in Image Descriptions
Description

Access to digital images is important to people who are blind or have low vision (BLV). Many contemporary image description efforts do not take into account this population's nuanced image description preferences. In this paper, we present a qualitative study that provides insight into 28 BLV people's experiences with descriptions of digital images from news websites, social networking sites/platforms, eCommerce websites, employment websites, online dating websites/platforms, productivity applications, and e-publications. Our findings reveal how image description preferences vary based on the source where digital images are encountered and the surrounding context. We provide recommendations for the development of next-generation image description technologies inspired by our empirical analysis.

ReCog: Supporting Blind People in Recognizing Personal Objects
Description

We present ReCog, a mobile app that enables blind users to recognize objects by training a deep network with their own photos of such objects. This functionality is useful to differentiate personal objects, which cannot be recognized with pre-trained recognizers and may lack distinguishing tactile features. To ensure that the objects are well-framed in the captured photos, ReCog integrates a camera-aiming guidance that tracks target objects and instructs the user through verbal and sonification feedback to appropriately frame them.

We report a two-session study with 10 blind participants using ReCog for object training and recognition, with and without guidance. We show that ReCog enables blind users to train and recognize their personal objects, and that camera-aiming guidance helps novice users to increase their confidence, achieve better accuracy, and learn strategies to capture better photos.

The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision
Description

Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio feedback compares to visual feedback in a wayfinding task. We developed visual and audio wayfinding guidance on smartglasses based on de facto standard approaches for blind and sighted people and conducted a study with 16 low vision participants. We found that participants made fewer mistakes and experienced lower cognitive load with visual feedback. Moreover, participants with a full field of view completed the wayfinding tasks faster when using visual feedback. However, many participants preferred audio feedback because of its shorter learning curve. We propose design guidelines for wayfinding systems for low vision.

Towards More Universal Wayfinding Technologies: Navigation Preferences Across Disabilities
Description

Accessibility researchers have been studying wayfinding technologies for people with disabilities for decades, typically focusing on solutions within disability populations — for example, technologies to support blind navigation. Yet, we know little about wayfinding needs across disabilities. In this paper, we describe a qualitative interview study examining the urban navigational experiences of 27 people who identified as older adults and/or who had cognitive, visual, hearing, and/or mobility disabilities. We found that many navigation route preferences were shared across disabilities (e.g., desire to avoid carpeted areas), while others diverged or were in tension (e.g., the need to avoid noisy areas while staying near main thoroughfares). To support design for multiple disability groups, we identify four dimensions of navigation preferences — technology, route, assistance, experience — and describe how these might usefully inform design of more universally usable wayfinding technologies.
