Interactive descriptions & wayfinding

Paper session

Conference
CHI 2020
Storytelling to Sensemaking: A Systematic Framework for Designing Auditory Description Display for Interactives
Abstract

Auditory description display is verbalized text typically used to describe live, recorded, or graphical displays to support access for people who are blind or visually impaired. Significant prior research has resulted in guidelines for auditory description for non-interactive or minimally interactive contexts. A lack of auditory description for complex interactive environments remains a tremendous barrier to access for people with visual impairments. In this work, we present a systematic design framework for designing auditory description within complex interactive environments. We illustrate how modular descriptions aligned with this framework can result in an interactive storytelling experience constructed through user interactions. This framework has been used in a set of published and widely used interactive science simulations, and in its generalized form could be applied to a variety of contexts.

Keywords
Auditory description display
Description design
Non-visual access
Interactive information spaces
Authors
Taliesin L. Smith
University of Colorado Boulder, Boulder, CO, USA
Emily B. Moore
University of Colorado Boulder, Boulder, CO, USA
DOI

10.1145/3313831.3376460

Paper URL

https://doi.org/10.1145/3313831.3376460

"Person, Shoes, Tree. Is the Person Naked?" What People with Vision Impairments Want in Image Descriptions
Abstract

Access to digital images is important to people who are blind or have low vision (BLV). Many contemporary image description efforts do not take into account this population's nuanced image description preferences. In this paper, we present a qualitative study that provides insight into 28 BLV people's experiences with descriptions of digital images from news websites, social networking sites/platforms, eCommerce websites, employment websites, online dating websites/platforms, productivity applications, and e-publications. Our findings reveal how image description preferences vary based on the source where digital images are encountered and the surrounding context. We provide recommendations for the development of next-generation image description technologies inspired by our empirical analysis.

Keywords
Image captions
alt text
accessibility
visual impairment
Authors
Abigale Stangl
University of Texas at Austin, Austin, TX, USA
Meredith Ringel Morris
Microsoft Research, Redmond, WA, USA
Danna Gurari
University of Texas at Austin, Austin, TX, USA
DOI

10.1145/3313831.3376404

Paper URL

https://doi.org/10.1145/3313831.3376404

ReCog: Supporting Blind People in Recognizing Personal Objects
Abstract

We present ReCog, a mobile app that enables blind users to recognize objects by training a deep network with their own photos of such objects. This functionality is useful to differentiate personal objects, which cannot be recognized with pre-trained recognizers and may lack distinguishing tactile features. To ensure that the objects are well-framed in the captured photos, ReCog integrates a camera-aiming guidance that tracks target objects and instructs the user through verbal and sonification feedback to appropriately frame them. We report a two-session study with 10 blind participants using ReCog for object training and recognition, with and without guidance. We show that ReCog enables blind users to train and recognize their personal objects, and that camera-aiming guidance helps novice users to increase their confidence, achieve better accuracy, and learn strategies to capture better photos.

Keywords
Visual impairment
object recognition
photography guidance
Authors
Dragan Ahmetovic
Università degli studi di Milano, Milano, Italy
Daisuke Sato
Carnegie Mellon University & IBM, Pittsburgh, PA, USA
Uran Oh
Ewha Womans University, Seoul, South Korea
Tatsuya Ishihara
IBM Research - Tokyo, Tokyo, Japan
Kris Kitani
Carnegie Mellon University, Pittsburgh, PA, USA
Chieko Asakawa
Carnegie Mellon University & IBM, Pittsburgh, PA, USA
DOI

10.1145/3313831.3376143

Paper URL

https://doi.org/10.1145/3313831.3376143

The Effectiveness of Visual and Audio Wayfinding Guidance on Smartglasses for People with Low Vision
Abstract

Wayfinding is a critical but challenging task for people who have low vision, a visual impairment that falls short of blindness. Prior wayfinding systems for people with visual impairments focused on blind people, providing only audio and tactile feedback. Since people with low vision use their remaining vision, we sought to determine how audio feedback compares to visual feedback in a wayfinding task. We developed visual and audio wayfinding guidance on smartglasses based on de facto standard approaches for blind and sighted people and conducted a study with 16 low vision participants. We found that participants made fewer mistakes and experienced lower cognitive load with visual feedback. Moreover, participants with a full field of view completed the wayfinding tasks faster when using visual feedback. However, many participants preferred audio feedback because of its shorter learning curve. We propose design guidelines for wayfinding systems for low vision.

Keywords
Accessibility
augmented reality
low vision
visual feedback
audio feedback
wayfinding
Authors
Yuhang Zhao
Cornell University, New York, NY, USA
Elizabeth Kupferstein
Cornell University, New York, NY, USA
Hathaitorn Rojnirun
Cornell University, New York, NY, USA
Leah Findlater
University of Washington, Seattle, WA, USA
Shiri Azenkot
Cornell University, New York, NY, USA
DOI

10.1145/3313831.3376516

Paper URL

https://doi.org/10.1145/3313831.3376516

Towards More Universal Wayfinding Technologies: Navigation Preferences Across Disabilities
Abstract

Accessibility researchers have been studying wayfinding technologies for people with disabilities for decades, typically focusing on solutions within disability populations — for example, technologies to support blind navigation. Yet, we know little about wayfinding needs across disabilities. In this paper, we describe a qualitative interview study examining the urban navigational experiences of 27 people who identified as older adults and/or who had cognitive, visual, hearing, and/or mobility disabilities. We found that many navigation route preferences were shared across disabilities (e.g., desire to avoid carpeted areas), while others diverged or were in tension (e.g., the need to avoid noisy areas while staying near main thoroughfares). To support design for multiple disability groups, we identify four dimensions of navigation preferences — technology, route, assistance, experience — and describe how these might usefully inform design of more universally usable wayfinding technologies.

Keywords
Accessibility
Navigation
Visual Impairment
Mobility Impairment
Cognitive Impairment
Deaf
Older Adults
Authors
Maya Gupta
University of California, Irvine, Irvine, CA, USA
Ali Abdolrahmani
University of Maryland, Baltimore County, Baltimore, MD, USA
Emory Edwards
University of California, Irvine, Irvine, CA, USA
Mayra Cortez
University of California, Irvine, Irvine, CA, USA
Andrew Tumang
University of California, Irvine, Irvine, CA, USA
Yasmin Majali
University of Maryland, Baltimore County, Baltimore, MD, USA
Marc Lazaga
University of Maryland, Baltimore County, Baltimore, MD, USA
Samhitha Tarra
University of California, Irvine, Irvine, CA, USA
Prasad Patil
University of Maryland, Baltimore County, Baltimore, MD, USA
Ravi Kuber
University of Maryland, Baltimore County, Baltimore, MD, USA
Stacy M Branham
University of California, Irvine, Irvine, CA, USA
DOI

10.1145/3313831.3376581

Paper URL

https://doi.org/10.1145/3313831.3376581