Accessibility

Conference Name
UIST 2022
Seeing our Blind Spots: Smart Glasses-based Simulation to Increase Design Students' Awareness of Visual Impairment
Abstract

As the population ages, many people will acquire visual impairments. To design better for these users, it is essential to build awareness of their perspective during everyday routines, especially among design students. Although several visual impairment simulation toolkits exist in both academia and as commercial products, analog and static simulation tools do not simulate effects tied to the user's eye movements. Meanwhile, VR and video see-through AR simulation methods are constrained by fields of view smaller than the natural human visual field and suffer from vergence-accommodation conflict (VAC), which correlates with visual fatigue, headache, and dizziness. In this paper, we enable an on-the-go, VAC-free visual impairment experience by leveraging our optical see-through glasses. The field of view of our glasses is approximately 160 degrees horizontally and 140 degrees vertically, and participants can experience both loss of central vision and loss of peripheral vision at different severities. Our evaluation (n = 14) indicates that the glasses can significantly and effectively reduce visual acuity and visual field without causing typical motion sickness symptoms such as headaches or visual fatigue. Questionnaires and qualitative feedback also showed how the glasses helped increase participants' awareness of visual impairment.
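
The simulation itself runs on custom optical see-through hardware, so it cannot be reproduced in software directly; purely as an illustration of the two conditions the abstract names, the sketch below (our own approximation, not the paper's method) masks an image to mimic central vision loss (a scotoma) or peripheral vision loss (tunnel vision) at a configurable severity.

```python
import numpy as np

def simulate_vision_loss(image, mode="central", severity=0.5):
    """Mask an HxWx3 image to mimic central or peripheral vision loss.

    `severity` (0..1) scales the occluded (or preserved) radius; this is an
    illustrative approximation, not the paper's optical implementation.
    """
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalised distance of every pixel from the image centre (0 at centre, ~1 at edges).
    dist = np.sqrt(((yy - h / 2) / (h / 2)) ** 2 + ((xx - w / 2) / (w / 2)) ** 2)
    if mode == "central":          # scotoma: occlude the centre of the field
        mask = dist > severity
    else:                          # tunnel vision: occlude the periphery
        mask = dist < (1.0 - severity)
    return image * mask[..., None]
```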

Authors
Qing Zhang
Keio University, Yokohama, Japan
Kai Kunze
Keio University, Tokyo, Japan
Giulia Barbareschi
Keio University, Yokohama, Japan
Yun Suen Pai
Keio University Graduate School of Media Design, Yokohama, Japan
Jamie A. Ward
Goldsmiths University of London, London, United Kingdom
Yifei Huang
The University of Tokyo, Tokyo, Japan
Juling Li
Keio University, Yokohama, Japan
Paper URL

https://doi.org/10.1145/3526113.3545687

CrossA11y: Identifying Video Accessibility Issues via Cross-modal Grounding
Abstract

Authors make their videos visually accessible by adding audio descriptions (AD) and auditorily accessible by adding closed captions (CC). However, creating AD and CC is challenging and tedious, especially for non-professional describers and captioners, because of the difficulty of identifying accessibility problems in videos: a video author has to watch the video through and manually check for inaccessible information frame by frame, in both the visual and auditory modalities. In this paper, we present CrossA11y, a system that helps authors efficiently detect and address visual and auditory accessibility issues in videos. Using cross-modal grounding analysis, CrossA11y automatically measures the accessibility of visual and audio segments in a video by checking for modality asymmetries. CrossA11y then displays these segments and surfaces visual and audio accessibility issues in a unified interface, making it intuitive to locate and review issues, script AD/CC in place, and immediately preview the described and captioned video. We demonstrate the effectiveness of CrossA11y through a lab study with 11 participants, comparing it to an existing baseline.
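
The core idea behind the cross-modal grounding analysis is that a segment whose visual content is not reflected in its audio (or vice versa) is a candidate accessibility issue. A minimal sketch of that asymmetry check, assuming generic pretrained encoders that map video frames and audio/transcripts into a shared embedding space (not the paper's actual models):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def find_asymmetric_segments(segments, visual_encoder, audio_encoder, threshold=0.3):
    """Flag segments whose visual and audio content disagree.

    `segments` is a list of (frames, audio_clip) pairs; both encoders are
    assumed to embed into the same space (e.g. a CLIP-style model).
    """
    issues = []
    for i, (frames, audio_clip) in enumerate(segments):
        v = visual_encoder(frames)      # embedding of the visual segment
        a = audio_encoder(audio_clip)   # embedding of the audio/transcript
        if cosine(v, a) < threshold:
            # Low cross-modal similarity: one modality carries information the
            # other lacks, so the segment may need an AD or a CC.
            issues.append(i)
    return issues
```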

Award
Best Paper
Authors
Xingyu "Bruce" Liu
UCLA, Los Angeles, California, United States
Ruolin Wang
UCLA, Los Angeles, California, United States
Dingzeyu Li
Adobe Research, Seattle, Washington, United States
Xiang 'Anthony' Chen
UCLA, Los Angeles, California, United States
Amy Pavel
University of Texas at Austin, Austin, Texas, United States
Paper URL

https://doi.org/10.1145/3526113.3545703

Grid-Coding: An Accessible, Efficient, and Structured Coding Paradigm for Blind and Low-Vision Programmers
Abstract

Sighted programmers often rely on visual cues (e.g., syntax coloring, keyword highlighting, code formatting) to perform common coding activities in text-based languages (e.g., Python). Unfortunately, blind and low-vision (BLV) programmers hardly benefit from these visual cues because they interact with computers via assistive technologies (e.g., screen readers), which fail to communicate visual semantics meaningfully. Prior work on making text-based programming languages and environments accessible has mostly focused on code navigation and, to some extent, code debugging, but not on code editing, which is an essential coding activity. We present Grid-Coding to fill this gap. Grid-Coding renders source code in a structured 2D grid, where each row, column, and cell has consistent, meaningful semantics. Its design is grounded in prior work and was refined with 28 BLV programmers through online participatory sessions over 2 months. We implemented the Grid-Coding prototype as a spreadsheet-like web application for Python and evaluated it in a study with 12 BLV programmers. The study revealed that, compared to a text editor (i.e., the go-to editor for BLV programmers), our prototype enabled BLV programmers to navigate source code quickly, find the context of a statement easily, detect syntax errors in existing code effectively, and write new code with fewer syntax errors. The study also revealed how BLV programmers adopted Grid-Coding and demonstrated novel interaction patterns conducive to increased programming productivity.
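
The prototype itself is a spreadsheet-like web application, but the row/column semantics the abstract describes can be illustrated with a small sketch. The mapping below (our assumption of one plausible scheme, not the paper's exact design) uses Python's ast module to place each statement in a cell whose row is the statement order and whose column is the nesting depth:

```python
import ast

def code_to_grid(source):
    """Map each statement to (row, column, kind): row = statement order,
    column = nesting depth. An illustrative mapping, not Grid-Coding's own."""
    grid = []

    def walk(body, depth):
        for node in body:
            grid.append((len(grid), depth, type(node).__name__))
            # Recurse into nested blocks (loop/if bodies, else branches, ...).
            for field in ("body", "orelse", "finalbody"):
                walk(getattr(node, field, []), depth + 1)

    walk(ast.parse(source).body, 0)
    return grid

print(code_to_grid("for i in range(3):\n    if i:\n        print(i)"))
# [(0, 0, 'For'), (1, 1, 'If'), (2, 2, 'Expr')]
```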

Award
Best Paper
Authors
Md Ehtesham-Ul-Haque
Pennsylvania State University, University Park, Pennsylvania, United States
Syed Mostofa Monsur
Bangladesh University of Engineering and Technology, Dhaka, Bangladesh
Syed Masum Billah
Pennsylvania State University, University Park, Pennsylvania, United States
Paper URL

https://doi.org/10.1145/3526113.3545620

Interactive Public Displays and Wheelchair Users: Between Direct, Personal and Indirect, Assisted Interaction
Abstract

We examine accessible interactions between wheelchair users and public displays through three studies. In the first study, we conduct a systematic literature review, which finds very few scientific papers on this topic and a predominant focus on touch input. In the second study, we conduct a systematic video review using YouTube as a data source and unveil accessibility challenges for public displays across several input modalities alternative to direct touch. In the third study, we conduct semi-structured interviews with eleven wheelchair users to understand their experience interacting with public displays and to collect their preferences for more accessible input modalities. Based on our findings, we propose the "assisted interaction" phase to extend Vogel and Balakrishnan's four-phase model of interaction with public displays, and the "ability" dimension for cross-device interaction design to support independent use of interactive public displays via users' personal mobile devices.

Authors
Radu-Daniel Vatavu
Ștefan cel Mare University of Suceava, Suceava, Romania
Ovidiu-Ciprian Ungurean
Ștefan cel Mare University of Suceava, Suceava, Romania
Laura-Bianca Bilius
Ștefan cel Mare University of Suceava, Suceava, Romania
Paper URL

https://doi.org/10.1145/3526113.3545662

PSST: Enabling Blind or Visually Impaired Developers to Author Sonifications of Streaming Sensor Data
Abstract

We present the first toolkit that equips blind and visually impaired (BVI) developers with the tools to create accessible data displays. Called PSST (Physical computing Streaming Sensor data Toolkit), it enables BVI developers to understand the data generated by sensors, from a mouse to a micro:bit physical computing platform. Earlier efforts to make physical computing accessible assume visual abilities and thus fail to address the need for BVI developers to access sensor data. PSST enables BVI developers to understand real-time, real-world sensor data by providing control over what to display, as well as when and how to display it. PSST supports filtering based on raw or calculated values, highlighting, and transformation of data. Output formats include tonal sonification, non-speech audio files, speech, and SVGs for laser cutting. We validate PSST through a series of demonstrations and a user study with BVI developers.
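
The toolkit's own pipeline is described in the paper, but the general pattern it names (filter a stream of sensor readings, then map the kept values to tones) can be sketched as follows; the sensor range and frequency mapping are assumptions chosen for illustration:

```python
def sonify_stream(samples, low=0, high=1023, f_min=220.0, f_max=880.0,
                  keep=lambda v: True):
    """Filter raw sensor readings and map each kept value to a tone frequency
    (linear mapping from the sensor range onto f_min..f_max, in Hz)."""
    for v in samples:
        if not keep(v):              # filtering on raw or calculated values
            continue
        t = (v - low) / (high - low)
        yield f_min + t * (f_max - f_min)

# e.g. sonify only the spikes above 800 on a 10-bit sensor
tones = list(sonify_stream([100, 512, 900, 1020], keep=lambda v: v > 800))
```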

Authors
Venkatesh Potluri
University of Washington, Seattle, Washington, United States
John R. Thompson
Microsoft Research, Redmond, Washington, United States
James Devine
Microsoft Research, Cambridge, Cambridgeshire, United Kingdom
Bongshin Lee
Microsoft Research, Redmond, Washington, United States
Nora Morsi
University of Washington, Seattle, Washington, United States
Peli de Halleux
Microsoft Research, Redmond, Washington, United States
Steve Hodges
Microsoft Research, Cambridge, United Kingdom
Jennifer Mankoff
University of Washington, Seattle, Washington, United States
Paper URL

https://doi.org/10.1145/3526113.3545700

TangibleGrid: Tangible Web Layout Design for Blind Users
Abstract

We present TangibleGrid, a novel device that allows blind users to understand and design the layout of a web page with real-time tangible feedback. We conducted semi-structured interviews and a series of co-design sessions with blind users to elicit insights that guided the design of TangibleGrid. Our final prototype contains shape-changing brackets that represent web elements and a baseboard that represents the web page canvas. Blind users can design a web page layout by creating and editing web elements, snapping or adjusting tangible brackets on top of the baseboard. The baseboard senses each bracket's type, size, and location, verbalizes this information, and renders the web page in the client browser. Through a formative user study, we found that blind users could understand a web page layout through TangibleGrid. They were also able to design a new web layout from scratch without the help of sighted people.
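
The bracket sensing is specific to the hardware, but the final step the abstract mentions (turning each bracket's type, size, and location into a rendered page) can be illustrated with a short sketch that emits CSS Grid rules; the bracket data format here is an assumption for illustration, not the paper's protocol:

```python
def brackets_to_css(brackets, columns=12):
    """Turn sensed brackets into CSS Grid placement rules.

    Each bracket is assumed to be a dict such as
    {"type": "image", "col": 1, "row": 1, "width": 6, "height": 2}.
    """
    rules = [f".page {{ display: grid; grid-template-columns: repeat({columns}, 1fr); }}"]
    for i, b in enumerate(brackets):
        rules.append(
            f".item-{i} {{ /* {b['type']} */ "
            f"grid-column: {b['col']} / span {b['width']}; "
            f"grid-row: {b['row']} / span {b['height']}; }}"
        )
    return "\n".join(rules)
```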

Authors
Jiasheng Li
University of Maryland, College Park, Maryland, United States
Zeyu Yan
University of Maryland, College Park, Maryland, United States
Ebrima Haddy Jarjue
University of Maryland, College Park, Maryland, United States
Ashrith Shetty
University of Maryland, College Park, Maryland, United States
Huaishu Peng
University of Maryland, College Park, Maryland, United States
Paper URL

https://doi.org/10.1145/3526113.3545627