ImageAssist: Tools for Enhancing Touchscreen-Based Image Exploration Systems for Blind and Low Vision Users

Abstract

Blind and low vision (BLV) users often rely on alt text to understand what a digital image is showing. However, recent research has investigated how touch-based image exploration on touchscreens can supplement alt text. Touchscreen-based image exploration systems allow BLV users to deeply understand images while granting a strong sense of agency. Yet, prior work has found that these systems require significant effort to use, and little work has explored their bottlenecks in depth or proposed solutions to those issues. To address this, we present ImageAssist, a set of three tools that assist BLV users through the process of exploring images by touch — scaffolding the exploration process. We perform a series of studies with BLV users to design and evaluate ImageAssist, and our findings reveal several implications for image exploration tools for BLV users.

Authors
Vishnu Nair
Columbia University, New York, New York, United States
Hanxiu 'Hazel' Zhu
Columbia University, New York, New York, United States
Brian A. Smith
Columbia University, New York, New York, United States
Paper URL

https://doi.org/10.1145/3544548.3581302

Video

Conference: CHI 2023

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2023.acm.org/)

Session: Automation and Gesture based interaction

Hall E
6 presentations
2023-04-26, 20:10:00 – 21:35:00