Accessible Content Creation

[A] Paper Room 01, 2021-05-10 17:00:00~2021-05-10 19:00:00 / [B] Paper Room 01, 2021-05-11 01:00:00~2021-05-11 03:00:00 / [C] Paper Room 01, 2021-05-11 09:00:00~2021-05-11 11:00:00

Conference Name
CHI 2021
Understanding Blind Screen-Reader Users' Experiences of Digital Artboards
Abstract

Two-dimensional canvases are the core components of many digital productivity and creativity tools, with "artboards" containing objects rather than pixels. Unfortunately, the contents of artboards remain largely inaccessible to blind users relying on screen-readers, but the precise problems are not well understood. This study sought to understand how blind screen-reader users interact with artboards. Specifically, we conducted contextual interviews, observations, and task-based usability studies with 15 blind participants to understand their experiences of artboards found in Microsoft PowerPoint, Apple Keynote, and Google Slides. Participants expressed that the inaccessibility of these artboards contributes to significant educational and professional barriers. We found that the key problems faced were: (1) high cognitive loads from a lack of feedback about artboard contents and object state; (2) difficulty determining relationships among artboard objects; and (3) constant uncertainty about whether object manipulations were successful. We offer design remedies that improve feedback for object state, relationships, and manipulations.

Authors
Anastasia Schaadhardt
University of Washington, Seattle, Washington, United States
Alexis Hiniker
University of Washington, Seattle, Washington, United States
Jacob O. Wobbrock
University of Washington, Seattle, Washington, United States
DOI

10.1145/3411764.3445242

Paper URL

https://doi.org/10.1145/3411764.3445242

Video
ASL Sea Battle: Gamifying Sign Language Data Collection
Abstract

The development of accurate machine learning models for sign languages like American Sign Language (ASL) has the potential to break down communication barriers for deaf signers. However, to date, no such models have been robust enough for real-world use. The primary barrier to enabling real-world applications is the lack of appropriate training data. Existing training sets suffer from several shortcomings: small size, limited signer diversity, lack of real-world settings, and missing or inaccurate labels. In this work, we present ASL Sea Battle, a sign language game designed to collect datasets that overcome these barriers, while also providing fun and education to users. We conduct a user study to explore the data quality that the game collects, and the user experience of playing the game. Our results suggest that ASL Sea Battle can reliably collect and label real-world sign language videos, and provides fun and education at the expense of data throughput.

Award
Honorable Mention
Authors
Danielle Bragg
Microsoft Research, Cambridge, Massachusetts, United States
Naomi Caselli
Boston University, Boston, Massachusetts, United States
John W. Gallagher
Northeastern University, Boston, Massachusetts, United States
Miriam Goldberg
Boston University, Boston, Massachusetts, United States
Courtney J. Oka
Microsoft, Cambridge, Massachusetts, United States
William Thies
Microsoft Research, Cambridge, Massachusetts, United States
DOI

10.1145/3411764.3445416

Paper URL

https://doi.org/10.1145/3411764.3445416

Video
What Makes Videos Accessible to Blind and Visually Impaired People?
Abstract

Videos on sites like YouTube have become a primary source for information online. User-generated videos almost universally lack audio descriptions, making most videos inaccessible to blind and visually impaired (BVI) consumers. Our formative studies with BVI people revealed that they used a time-consuming trial-and-error approach when searching for videos: clicking on a video, watching a portion, leaving the video, and repeating the process to find videos that would be accessible, i.e., understandable without additional description of the visual content. BVI people also reported video accessibility heuristics that characterize accessible and inaccessible videos. We instantiate 7 of the identified heuristics (2 audio-related, 2 video-related, and 3 audio-visual) as automated metrics to assess video accessibility. Our automated video accessibility metrics correlate with BVI people’s perception of video accessibility (Adjusted R-squared = 0.642). We augment a video search interface with our video accessibility metrics and find that our system improves BVI people’s efficiency in finding accessible videos. With accessibility metrics, participants found videos 40% faster and clicked on 54% fewer videos in our user study. By integrating video accessibility metrics, video hosting platforms could help people surface accessible videos and encourage content creators to author more accessible products, improving video accessibility for all.
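
The reported correlation (Adjusted R-squared = 0.642) describes how well the seven automated heuristic metrics track BVI viewers' accessibility judgements. The paper's actual model is not detailed in this abstract; the Python sketch below only illustrates, on synthetic data, how per-video heuristic scores could be combined with a linear fit and scored with adjusted R-squared. All names and data are hypothetical.

# Illustrative sketch, not the paper's implementation: fit accessibility
# ratings from seven heuristic metric scores and report adjusted R^2.
import numpy as np

def adjusted_r2(y_true, y_pred, n_features):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    n = len(y_true)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)

# Hypothetical data: rows are videos, columns are the 7 heuristic metrics
# (2 audio, 2 video, 3 audio-visual), normalized to [0, 1].
rng = np.random.default_rng(0)
metrics = rng.random((50, 7))
ratings = metrics @ rng.random(7) + 0.1 * rng.standard_normal(50)

# Ordinary least squares with an intercept column: ratings ~ metrics @ w.
X = np.hstack([metrics, np.ones((len(metrics), 1))])
w, *_ = np.linalg.lstsq(X, ratings, rcond=None)
predicted = X @ w

print(f"Adjusted R^2: {adjusted_r2(ratings, predicted, n_features=7):.3f}")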

Authors
Xingyu Liu
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Patrick Carrington
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Xiang 'Anthony' Chen
UCLA, Los Angeles, California, United States
Amy Pavel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445233

Paper URL

https://doi.org/10.1145/3411764.3445233

Video
Detecting and Defending Against Seizure-Inducing GIFs in Social Media
Abstract

Despite recent improvements in online accessibility, the Internet remains an inhospitable place for users with photosensitive epilepsy, a chronic condition in which certain light stimuli can trigger seizures and even lead to death. In this paper, we explore how current risk detection systems have allowed attackers to take advantage of design oversights and target vulnerable users with photosensitivity on popular social media platforms. Through interviews with photosensitive individuals and a critical review of existing systems, we construct design requirements for consumer-driven protective systems and develop a prototype browser extension for actively detecting and disarming potentially seizure-inducing GIFs. We validate our system with a comprehensive dataset of simulated and collected GIFs. Finally, we conduct a novel quantitative analysis of the prevalence of seizure-inducing GIFs across popular social media platforms and contribute recommendations for improving online accessibility for individuals with photosensitivity. All study materials are available at https://osf.io/5a3dy/.
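
The abstract does not describe the extension's detection algorithm. As a rough illustration of the general idea of flagging flashing content, the Python sketch below counts large frame-to-frame luminance swings in a GIF and flags more than three flashes within any one-second window, loosely following general WCAG photosensitivity guidance; the swing threshold and the input file are assumptions, not the paper's method.

# Illustrative sketch only, not the paper's detector: a naive flash counter
# for animated GIFs. A "flash" here is a large jump in mean luminance between
# consecutive frames; more than three flashes in any one-second window is
# treated as potentially harmful.
import numpy as np
from PIL import Image, ImageSequence

def worst_flash_count(gif_path, luminance_swing=40):
    """Largest number of big brightness swings inside any one-second window."""
    brightness, times = [], [0.0]
    with Image.open(gif_path) as gif:
        for frame in ImageSequence.Iterator(gif):
            brightness.append(np.asarray(frame.convert("L"), dtype=float).mean())
            times.append(times[-1] + frame.info.get("duration", 100) / 1000.0)

    flash_times = [times[i + 1] for i in range(len(brightness) - 1)
                   if abs(brightness[i + 1] - brightness[i]) >= luminance_swing]

    # Slide a one-second window over the flash timestamps.
    return max((sum(1 for u in flash_times if t <= u < t + 1.0)
                for t in flash_times), default=0)

if __name__ == "__main__":
    flashes = worst_flash_count("example.gif")   # hypothetical input file
    verdict = "potentially seizure-inducing" if flashes > 3 else "below the flash threshold"
    print(f"{verdict} ({flashes} flashes in the worst one-second window)")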

Award
Honorable Mention
Authors
Laura South
Northeastern University, Boston, Massachusetts, United States
David Saffo
Northeastern University, Boston, Massachusetts, United States
Michelle A. Borkin
Northeastern University, Boston, Massachusetts, United States
DOI

10.1145/3411764.3445510

Paper URL

https://doi.org/10.1145/3411764.3445510

Video
Latte: Use-Case and Assistive-Service Driven Automated Accessibility Testing Framework for Android
Abstract

For the 15% of the world population with disabilities, accessibility is arguably the most critical software quality attribute. The ever-growing reliance of users with disabilities on mobile apps further underscores the need for accessible software in this domain. Existing automated accessibility assessment techniques primarily aim to detect violations of predefined guidelines, thereby producing a massive number of accessibility warnings that often overlook the way software is actually used by users with disabilities. This paper presents a novel, high-fidelity form of accessibility testing for Android apps, called Latte, that automatically reuses tests written to evaluate an app's functional correctness to assess its accessibility as well. Latte first extracts the use case corresponding to each test, and then executes each use case in the way disabled users would, i.e., using assistive services. Our empirical evaluation on real-world Android apps demonstrates Latte's effectiveness in detecting substantially more useful defects than prior techniques.
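
Latte's pipeline is only summarized in this abstract. The Python sketch below is a conceptual illustration on toy data of the core idea: a use-case step counts as passing not when the target can be tapped directly, but when it can be reached and activated through the linear navigation a screen-reader user relies on, so unlabeled elements surface as failures. All names and data are hypothetical, not Latte's implementation.

# Conceptual sketch, not Latte: replay a functional test's use-case steps
# the way an assistive-service user would (linear focus movement plus
# activation) instead of tapping screen coordinates.
from dataclasses import dataclass

@dataclass
class Element:
    resource_id: str
    content_description: str   # empty string models a missing accessibility label

def reachable_via_screen_reader(screen, target_id):
    """Move linear focus element by element; only labeled elements are usable."""
    for element in screen:
        if not element.content_description:
            continue                          # announced uselessly, effectively unusable
        if element.resource_id == target_id:
            return True                       # a double-tap here would activate it
    return False

# Toy screen: the "add_to_cart" button has no content description.
screen = [
    Element("search_box", "Search products"),
    Element("add_to_cart", ""),
    Element("checkout", "Proceed to checkout"),
]
use_case = ["search_box", "add_to_cart", "checkout"]   # steps taken from a functional test

for step in use_case:
    ok = reachable_via_screen_reader(screen, step)
    print(f"{step}: {'ok via assistive service' if ok else 'accessibility failure'}")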

Authors
Navid Salehnamadi
University of California, Irvine, Irvine, California, United States
Abdulaziz Alshayban
University of California, Irvine, Irvine, California, United States
Jun-Wei Lin
University of California, Irvine, Irvine, California, United States
Iftekhar Ahmed
University of California, Irvine, Irvine, California, United States
Stacy Branham
University of California, Irvine, Irvine, California, United States
Sam Malek
University of California, Irvine, Irvine, California, United States
DOI

10.1145/3411764.3445455

Paper URL

https://doi.org/10.1145/3411764.3445455

Video
Say It All: Feedback for Improving Non-Visual Presentation Accessibility
Abstract

Presenters commonly use slides as visual aids for informative talks. When presenters fail to verbally describe the content on their slides, blind and visually impaired audience members lose access to necessary content, making the presentation difficult to follow. Our analysis of 90 existing presentation videos revealed that 72% of 610 visual elements (e.g., images, text) were insufficiently described. To help presenters create accessible presentations, we introduce Presentation A11y, a system that provides real-time and post-presentation accessibility feedback. Our system analyzes visual elements on the slide and the transcript of the verbal presentation to provide element-level feedback on what visual content needs to be further described or even removed. Presenters using our system with their own slide-based presentations described more of the content on their slides, and identified 3.26 times more accessibility problems to fix after the talk than when using a traditional slide-based presentation interface. Integrating accessibility feedback into content creation tools will improve the accessibility of informational content for all.
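
The element-level feedback hinges on comparing what is on a slide with what is actually said. The paper's analysis is not reproduced here; the Python sketch below shows the idea in its simplest form, flagging slide elements whose words barely appear in the spoken transcript. The 50% threshold and the sample slide and transcript are assumptions for illustration only.

# Illustrative sketch, not the Presentation A11y implementation: flag slide
# elements whose text is barely mentioned in the verbal transcript.
import re

def coverage(element_text, transcript):
    """Fraction of an element's words that appear somewhere in the transcript."""
    words = set(re.findall(r"[a-z']+", element_text.lower()))
    spoken = set(re.findall(r"[a-z']+", transcript.lower()))
    return len(words & spoken) / len(words) if words else 1.0

slide_elements = {
    "title": "Results: accuracy by model size",
    "chart caption": "Accuracy plateaus beyond 1B parameters",
    "footnote": "Error bars show 95% confidence intervals",
}
transcript = ("Here are the results. As you can see, accuracy by model size "
              "plateaus once we go beyond one billion parameters.")

for name, text in slide_elements.items():
    c = coverage(text, transcript)
    status = "described" if c >= 0.5 else "needs verbal description"
    print(f"{name}: {c:.0%} of words spoken -> {status}")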

Authors
Yi-Hao Peng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
JiWoong Jang
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Jeffrey P. Bigham
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Amy Pavel
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445572

Paper URL

https://doi.org/10.1145/3411764.3445572

Video
Toward Automatic Audio Description Generation for Accessible Videos
Abstract

Video accessibility is essential for people with visual impairments. Audio descriptions describe what is happening on-screen, e.g., physical actions, facial expressions, and scene changes. Generating high-quality audio descriptions requires substantial manual effort. To address this accessibility obstacle, we built a system that analyzes the audiovisual content of a video and generates audio descriptions. The system consists of three modules: AD insertion time prediction, AD generation, and AD optimization. We evaluated the quality of our system on five types of videos by conducting qualitative studies with 20 sighted users and 12 users who were blind or visually impaired. Our findings revealed how audio description preferences varied with user types and video types. Based on our study's analysis, we provide recommendations for the development of future audio description generation technologies.
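
Of the three modules, AD insertion time prediction is the easiest to picture concretely. As a minimal illustration (not the paper's method), the Python sketch below finds pauses between speech segments long enough to hold a spoken description; the segment times and the minimum gap length are assumptions.

# Illustrative sketch, not the paper's system: find gaps in the speech track
# that are long enough to hold an inserted audio description.

def find_ad_slots(speech_segments, video_length, min_gap=3.0):
    """Return (start, end) gaps between speech segments of at least min_gap seconds."""
    slots, cursor = [], 0.0
    for start, end in sorted(speech_segments):
        if start - cursor >= min_gap:
            slots.append((cursor, start))
        cursor = max(cursor, end)
    if video_length - cursor >= min_gap:
        slots.append((cursor, video_length))
    return slots

# Hypothetical speech segments (in seconds) from a 60-second video.
speech = [(0.0, 8.5), (12.0, 20.0), (21.0, 40.0), (47.5, 55.0)]
for start, end in find_ad_slots(speech, video_length=60.0):
    print(f"insert audio description at {start:.1f}-{end:.1f} s ({end - start:.1f} s available)")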

Authors
Yujia Wang
Beijing Institute of Technology, Beijing, China
Wei Liang
Beijing Institute of Technology, Beijing, China
Haikun Huang
George Mason University, Fairfax, Virginia, United States
Yongqi Zhang
University of Massachusetts Boston, Boston, Massachusetts, United States
Dingzeyu Li
Adobe Research, Seattle, Washington, United States
Lap-Fai Yu
George Mason University, Fairfax, Virginia, United States
DOI

10.1145/3411764.3445347

Paper URL

https://doi.org/10.1145/3411764.3445347

Video
Technology Developments in Touch-Based Accessible Graphics: A Systematic Review of Research 2010-2020
Abstract

This paper presents a systematic literature review of 292 publications from 97 unique venues on touch-based graphics for people who are blind or have low vision, from 2010 to mid-2020. It is the first review of its kind on touch-based accessible graphics. It is timely because it allows us to assess the impact of new technologies such as commodity 3D printing and low-cost electronics on the production and presentation of accessible graphics. As expected, our review shows an increase in publications from 2014 that we can attribute to these developments. It also reveals the need to: broaden application areas, especially to the workplace; broaden end-user participation throughout the full design process; and conduct more in situ evaluation. This work is linked to an online living resource to be shared with the wider community.

Authors
Matthew Butler
Monash University, Melbourne, Australia
Leona M. Holloway
Monash University, Melbourne, VIC, Australia
Samuel Reinders
Monash University, Melbourne, Victoria, Australia
Cagatay Goncu
Monash University, Melbourne, Victoria, Australia
Kim Marriott
Monash University, Melbourne, Australia
DOI

10.1145/3411764.3445207

Paper URL

https://doi.org/10.1145/3411764.3445207

Video
Comparison of Methods for Evaluating Complexity of Simplified Texts among Deaf and Hard-of-Hearing Adults at Different Literacy Levels
Abstract

Research has explored using Automatic Text Simplification for reading assistance, with prior work identifying benefits and interests from Deaf and Hard-of-Hearing (DHH) adults. While the evaluation of these technologies remains a crucial aspect of research in the area, researchers lack guidance in terms of how to evaluate text complexity with DHH readers. Thus, in this work we conduct methodological research to evaluate metrics identified from prior work (including reading speed, comprehension questions, and subjective judgements of understandability and readability) in terms of their effectiveness for evaluating texts modified to be at various complexity levels with DHH adults at different literacy levels. Subjective metrics and low-linguistic-complexity comprehension questions distinguished certain text complexity levels with participants with lower literacy. Among participants with higher literacy, only subjective judgements of text readability distinguished certain text complexity levels. For all metrics, participants with higher literacy scored higher or provided more positive subjective judgements overall.

Authors
Oliver Alonzo
Rochester Institute of Technology, Rochester, New York, United States
Jessica Trussell
Rochester Institute of Technology, Rochester, New York, United States
Becca Dingman
Rochester Institute of Technology, Rochester, New York, United States
Matt Huenerfauth
Rochester Institute of Technology, Rochester, New York, United States
DOI

10.1145/3411764.3445038

Paper URL

https://doi.org/10.1145/3411764.3445038

Video
Screen Recognition: Creating Accessibility Metadata for Mobile Applications from Pixels
Abstract

Many accessibility features available on mobile platforms require applications (apps) to provide complete and accurate metadata describing user interface (UI) components. Unfortunately, many apps do not provide sufficient metadata for accessibility features to work as expected. In this paper, we explore inferring accessibility metadata for mobile apps from their pixels, as the visual interfaces often best reflect an app's full functionality. We trained a robust, fast, memory-efficient, on-device model to detect UI elements using a dataset of 77,637 screens (from 4,068 iPhone apps) that we collected and annotated. To further improve UI detections and add semantic information, we introduced heuristics (e.g., UI grouping and ordering) and additional models (e.g., to recognize UI content, state, and interactivity). We built Screen Recognition to generate accessibility metadata to augment iOS VoiceOver. In a study with 9 screen reader users, we validated that our approach improves the accessibility of existing mobile apps, enabling even previously inaccessible apps to be used.
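
The abstract summarizes the pipeline (pixel-level UI detection, then grouping/ordering heuristics and semantic models) without implementation detail. The Python sketch below illustrates only the ordering step on hypothetical detections: a simple row-major sort that turns detected boxes into a plausible screen-reader reading order with spoken labels. It is not Apple's heuristic; the classes, tolerances, and data are assumptions.

# Illustrative sketch, not Screen Recognition: order detected UI elements
# top-to-bottom, then left-to-right, and give each a spoken label.
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str    # e.g., "button", "text", "image"
    text: str    # recognized text, empty if none
    x: int       # left edge in pixels
    y: int       # top edge in pixels

def reading_order(detections, row_tolerance=20):
    """Detections within row_tolerance pixels vertically are treated as one row."""
    return sorted(detections, key=lambda d: (d.y // row_tolerance, d.x))

def spoken_label(d):
    return f"{d.text or 'unlabeled'}, {d.kind}"

screen = [
    Detection("image", "", x=10, y=8),
    Detection("text", "Settings", x=60, y=10),
    Detection("button", "Back", x=300, y=12),
    Detection("button", "Sign out", x=20, y=200),
]
for d in reading_order(screen):
    print(spoken_label(d))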

Award
Best Paper
Authors
Xiaoyi Zhang
Apple Inc, Seattle, Washington, United States
Lilian de Greef
Apple Inc, Seattle, Washington, United States
Amanda Swearngin
Apple Inc, Seattle, Washington, United States
Samuel White
Apple Inc, Pittsburgh, Pennsylvania, United States
Kyle Murray
Apple Inc, Pittsburgh, Pennsylvania, United States
Lisa Yu
Apple Inc, Pittsburgh, Pennsylvania, United States
Qi Shan
Apple Inc, Seattle, Washington, United States
Jeffrey Nichols
Apple Inc, San Diego, California, United States
Jason Wu
Apple Inc, Pittsburgh, Pennsylvania, United States
Chris Fleizach
Apple Inc, Cupertino, California, United States
Aaron Everitt
Apple Inc, Cupertino, California, United States
Jeffrey P. Bigham
Apple Inc, Pittsburgh, Pennsylvania, United States
DOI

10.1145/3411764.3445186

Paper URL

https://doi.org/10.1145/3411764.3445186

Video
TapeBlocks: A Making Toolkit for People Living with Intellectual Disabilities
Abstract

The limited accessibility and affordability of tangible electronic toolkits are significant barriers to their uptake by people with disabilities. We present the design and evaluation of TapeBlocks, a low-cost, low-fidelity toolkit intended to be accessible for people with intellectual disabilities while promoting creativity and engagement. We evaluated TapeBlocks by interviewing makers, special educational needs teachers, and support coaches. Analysis of these interviews informed the design of a series of maker workshops using TapeBlocks with young adults living with intellectual disabilities, led by support coaches with support from the research team. Participants were able to engage with TapeBlocks and making, eventually building their own TapeBlocks to make personal creations. Our evaluation reveals how TapeBlocks supports accessible making and playful discovery of electronics for people living with disabilities, and addresses a gap in existing toolkits by being tinkerable and affordable, and by having a low threshold for engagement.

Authors
Kirsten Ellis
Monash University, Melbourne, Vic, Australia
Emily Dao
Monash University, Melbourne, Victoria, Australia
Osian Smith
Swansea University, Swansea, United Kingdom
Stephen Lindsay
Swansea University, Swansea, United Kingdom
Patrick Olivier
Monash University, Melbourne, Victoria, Australia
DOI

10.1145/3411764.3445647

Paper URL

https://doi.org/10.1145/3411764.3445647

Video