Technologies to Support Accessibility

Conference
CHI 2022
Understanding Interactions for Smart Wheelchair Navigation in Crowds
Abstract

Shared control wheelchairs can help users navigate through crowds by enabling the person to drive the wheelchair while receiving support in avoiding pedestrians. To date, research into shared control has largely overlooked the perspectives of wheelchair users. In this paper, we present two studies that aim to address this gap. The first study involved a series of semi-structured interviews with wheelchair users, which highlighted the presence of two distinct interaction loops: one between the user and the wheelchair, and a second between the user and the crowd. In the second study, we engaged wheelchair users and designers to co-design appropriate feedback loops for future shared control interaction interfaces. Based on the results of the co-design session, we present design implications for shared control wheelchairs around the need for empathy, embodiment, and social awareness; situational awareness and adaptability; and selective information management.
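
Shared control typically blends the user's driving command with an assistive correction, which is the first interaction loop (user and wheelchair) the abstract mentions. The sketch below is a minimal, hypothetical illustration of such a blend; the function names, the linear blending rule, and the numbers are assumptions for illustration, not the system studied in the paper.

```python
# Minimal sketch of a shared-control blend (illustrative assumptions only).

def blend_commands(user_cmd, avoidance_cmd, alpha):
    """Linearly blend user and controller velocity commands.

    user_cmd, avoidance_cmd: (linear_velocity, angular_velocity) tuples.
    alpha: blending weight in [0, 1]; 0 = full user control,
           1 = full controller authority.
    """
    v = (1 - alpha) * user_cmd[0] + alpha * avoidance_cmd[0]
    w = (1 - alpha) * user_cmd[1] + alpha * avoidance_cmd[1]
    return (v, w)

# Example: the controller damps forward speed and steers away from a
# pedestrian detected ahead, while mostly preserving the user's intent.
user = (0.8, 0.0)    # user pushes the joystick straight ahead
avoid = (0.3, 0.5)   # controller suggests slowing and turning left
print(blend_commands(user, avoid, alpha=0.3))  # ≈ (0.65, 0.15)
```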

Award
Honorable Mention
Authors
Bingqing Zhang
University College London, London, United Kingdom
Giulia Barbareschi
University College London, London, United Kingdom
Roxana Ramirez Herrera
University College London, London, United Kingdom
Tom Carlson
University College London, London, United Kingdom
Catherine Holloway
University College London, London, United Kingdom
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502085

Design and Evaluation of Hybrid Search for American Sign Language to English Dictionaries: Making the Most of Imperfect Sign Recognition
Abstract

Searching for the meaning of an unfamiliar sign-language word in a dictionary is difficult for learners, but emerging sign-recognition technology will soon enable users to search by submitting a video of themselves performing the word they recall. However, sign-recognition technology is imperfect, and users may need to scan a long list of possible results to find the sign they are seeking. To speed this search, we present a hybrid-search approach, in which users begin with a video-based query and then filter the search results by linguistic properties, e.g., handshape. We interviewed 32 ASL learners about their preferences for the content and appearance of the search-results page and the filtering criteria. A between-subjects experiment with 20 ASL learners revealed that our hybrid search system outperformed a video-based search system on multiple satisfaction and performance metrics. Our findings provide guidance for designers of video-based sign-language dictionary search systems, with implications for other search scenarios.
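
As a minimal illustration of the hybrid-search idea, the sketch below ranks a recognizer's candidate signs by confidence and then narrows them by user-selected linguistic properties. The data fields and property labels are hypothetical, not the authors' implementation.

```python
# Sketch of hybrid search: video-based ranking + linguistic-property filters.

def hybrid_search(candidates, handshape=None, num_hands=None):
    """Filter recognizer output by user-selected linguistic properties.

    candidates: list of dicts like
        {"gloss": "TEACHER", "score": 0.62, "handshape": "flat", "hands": 2}
    handshape, num_hands: optional filters; None means "don't filter".
    """
    results = [
        c for c in candidates
        if (handshape is None or c["handshape"] == handshape)
        and (num_hands is None or c["hands"] == num_hands)
    ]
    # Highest recognizer confidence first.
    return sorted(results, key=lambda c: c["score"], reverse=True)

# Example: an imperfect recognizer returns many candidates; filtering by
# handshape shortens the list the learner must scan.
candidates = [
    {"gloss": "TEACHER", "score": 0.62, "handshape": "flat", "hands": 2},
    {"gloss": "SCHOOL",  "score": 0.57, "handshape": "flat", "hands": 2},
    {"gloss": "NAME",    "score": 0.55, "handshape": "H",    "hands": 2},
]
print(hybrid_search(candidates, handshape="flat"))
```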

Authors
Saad Hassan
Rochester Institute of Technology, Rochester, New York, United States
Akhter Al Amin
Rochester Institute of Technology, Rochester, New York, United States
Alexis Gordon
Rochester Institute of Technology, Rochester, New York, United States
Sooyeon Lee
Rochester Institute of Technology, Rochester, New York, United States
Matt Huenerfauth
Rochester Institute of Technology, Rochester, New York, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501986

Co11ab: Augmenting Accessibility in Synchronous Collaborative Writing for People with Vision Impairments
Abstract

Collaborative writing is an integral part of academic and professional work. Although some prior research has focused on accessibility in collaborative writing, we know little about how visually impaired writers work in real time with sighted collaborators or how online editing tools could better support their work. Grounded in formative interviews and observations with eight screen reader users, we built Co11ab, a Google Docs extension that provides configurable audio cues to help users understand who is editing (or has edited) what, and where, in a shared document. Results from a design exploration with fifteen screen reader users, including three naturalistic sessions of use with sighted colleagues, reveal how screen reader users understand various auditory representations and use them to coordinate real-time collaborative writing. We revisit what collaboration awareness means for screen reader users and discuss design considerations for future systems.
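
As a rough illustration of configurable audio cues for collaboration awareness, the sketch below maps hypothetical edit events to earcon parameters (pitch per collaborator, volume by distance from the reader's position). The event model and parameter names are assumptions, not Co11ab's actual design.

```python
# Sketch: map collaborators' edit events to configurable audio cues.

from dataclasses import dataclass

@dataclass
class EditEvent:
    author: str       # collaborator who made the edit
    paragraph: int    # location of the edit in the document
    kind: str         # "insert" or "delete"

# One configurable tone per collaborator; pitch distinguishes authors.
AUTHOR_PITCH_HZ = {"Alice": 440, "Bob": 550}

def cue_for(event: EditEvent, my_paragraph: int) -> dict:
    """Return parameters for a short earcon describing one edit."""
    return {
        "pitch_hz": AUTHOR_PITCH_HZ.get(event.author, 330),
        # Quieter when the edit is far from the user's own position,
        # so nearby activity stands out without drowning out reading.
        "volume": 1.0 if abs(event.paragraph - my_paragraph) <= 1 else 0.4,
        "timbre": "soft" if event.kind == "insert" else "sharp",
    }

# Example: Alice inserts text in the paragraph the user is working on.
print(cue_for(EditEvent("Alice", paragraph=3, kind="insert"), my_paragraph=3))
```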

Authors
Maitraye Das
Northwestern University, Evanston, Illinois, United States
Thomas Barlow McHugh
Northwestern University, Evanston, Illinois, United States
Anne Marie Piper
University of California, Irvine, Irvine, California, United States
Darren Gergle
Northwestern University, Evanston, Illinois, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3501918

Ga11y: an Automated GIF Annotation System for Visually Impaired Users
Abstract

Animated GIF images have become prevalent in internet culture, often used to express richer and more nuanced meanings than static images. But animated GIFs often lack adequate alternative-text descriptions, and generating such descriptions automatically is challenging, leaving GIFs inaccessible to blind or low-vision (BLV) users. To improve the accessibility of animated GIFs for BLV users, we present Ga11y (pronounced "galley"), a system for creating GIF annotations. Ga11y combines the power of machine intelligence and crowdsourcing and has three components: an Android client for submitting annotation requests, a backend server and database, and a web interface where volunteers can respond to annotation requests. We evaluated three human annotation interfaces and employed the one that yielded the best annotation quality. We also conducted a multi-stage evaluation with 12 BLV participants from the United States and China, receiving positive feedback.
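
The sketch below illustrates one plausible version of the hybrid machine/human flow the abstract describes: return an automatic description when the model is confident, otherwise queue the request for volunteer annotators on the web interface. All names, the threshold, and the recognizer interface are hypothetical, not Ga11y's real code.

```python
# Sketch: machine-first GIF annotation with a crowdsourcing fallback.

ANNOTATION_QUEUE = []   # stands in for the backend database of open requests

def request_annotation(gif_url, auto_describe, confidence_threshold=0.8):
    """Return a machine description or queue the GIF for human annotation.

    auto_describe: callable returning (description, confidence) for a GIF;
    stands in for the machine-intelligence component.
    """
    description, confidence = auto_describe(gif_url)
    if confidence >= confidence_threshold:
        return {"status": "automatic", "alt_text": description}
    # Low confidence: hand the request to volunteer annotators.
    ANNOTATION_QUEUE.append(gif_url)
    return {"status": "pending_human", "alt_text": description}

# Example with a stubbed recognizer that is unsure about this GIF.
result = request_annotation(
    "https://example.com/cat.gif",
    auto_describe=lambda url: ("a cat jumping", 0.42),
)
print(result, ANNOTATION_QUEUE)
```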

Authors
Mingrui Ray Zhang
University of Washington, Seattle, Washington, United States
Mingyuan Zhong
University of Washington, Seattle, Washington, United States
Jacob O. Wobbrock
University of Washington, Seattle, Washington, United States
Paper URL

https://dl.acm.org/doi/abs/10.1145/3491102.3502092
