Universal Accessibility A

Conference Name
CHI 2024
Exploring Mobile Device Accessibility: Challenges, Insights, and Recommendations for Evaluation Methodologies
Abstract

With the ubiquitous use of mobile applications, it is paramount that they are accessible, so they can empower all users, including those with different needs. Determining if an app is accessible implies conducting an accessibility evaluation. While accessibility evaluations have been thoroughly studied in the web domain, there are still many open questions when evaluating mobile applications. This paper investigates mobile accessibility evaluation methodologies. We conducted four studies, including an examination of accessibility reports from European Member States, interviews with accessibility experts, manual evaluations, and usability tests involving users. Our investigations have uncovered significant limitations in current evaluation methods, suggesting that the absence of authoritative guidelines and standards, similar to what exists for the web but tailored specifically to mobile devices, hampers the effectiveness of accessibility evaluation and monitoring activities. Based on our findings, we present a set of recommendations aimed at improving the evaluation methodologies for assessing mobile applications’ accessibility.

Authors
Letícia Seixas Pereira
University of Lisbon, Lisbon, Portugal
Maria Matos
University of Lisbon, Lisbon, Portugal
Carlos Duarte
Faculdade de Ciências da Universidade de Lisboa, Lisboa, Portugal
Paper URL

https://doi.org/10.1145/3613904.3642526

Human I/O: Towards a Unified Approach to Detecting Situational Impairments
Abstract

Situationally Induced Impairments and Disabilities (SIIDs) can significantly hinder user experience in contexts such as poor lighting, noise, and multi-tasking. While prior research has introduced algorithms and systems to address these impairments, they predominantly cater to specific tasks or environments and fail to accommodate the diverse and dynamic nature of SIIDs. We introduce Human I/O, a unified approach to detecting a wide range of SIIDs by gauging the availability of human input/output channels. Leveraging egocentric vision, multimodal sensing, and reasoning with large language models, Human I/O achieves a mean absolute error of 0.22 and an 82% accuracy in availability prediction across 60 in-the-wild egocentric video recordings in 32 different scenarios. Furthermore, while the core focus of our work is on the detection of SIIDs rather than the creation of adaptive user interfaces, we showcase the efficacy of our prototype via a user study with 10 participants. Findings suggest that Human I/O significantly reduces effort and improves user experience in the presence of SIIDs, paving the way for more adaptive and accessible interactive systems in the future.
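
To make the described pipeline concrete, here is a minimal Python sketch of the core idea: estimating the availability of the user's input/output channels from sensed context by querying a language model. The channel taxonomy, the Context fields, and the query_llm stub are illustrative assumptions, not the paper's actual interfaces.

```python
from dataclasses import dataclass
from typing import Dict

# Channels whose availability is gauged, per the abstract's framing of
# "human input/output channels". The exact taxonomy here is an assumption.
CHANNELS = ["vision", "hearing", "vocal", "hands"]

@dataclass
class Context:
    scene_description: str   # e.g., from an egocentric-vision captioner (assumed)
    ambient_noise_db: float  # from multimodal sensing (assumed)

def query_llm(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned per-channel rating."""
    return "vision:0.9 hearing:0.2 vocal:0.4 hands:0.1"

def predict_availability(ctx: Context) -> Dict[str, float]:
    """Prompt the (stubbed) LLM to rate each channel's availability in [0, 1]."""
    prompt = (
        f"Scene: {ctx.scene_description}\n"
        f"Ambient noise: {ctx.ambient_noise_db:.0f} dB\n"
        f"Rate the availability (0-1) of each channel: {', '.join(CHANNELS)}."
    )
    reply = query_llm(prompt)
    scores: Dict[str, float] = {}
    for token in reply.split():
        name, value = token.split(":")
        scores[name] = float(value)
    return scores

if __name__ == "__main__":
    ctx = Context("user is washing dishes in a loud kitchen", ambient_noise_db=78)
    print(predict_availability(ctx))  # e.g., hands busy, hearing masked by noise
```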

Award
Honorable Mention
Authors
Xingyu Bruce Liu
UCLA, Los Angeles, California, United States
Jiahao Nick Li
UCLA, Los Angeles, California, United States
David Kim
Google, Zurich, Switzerland
Xiang 'Anthony' Chen
UCLA, Los Angeles, California, United States
Ruofei Du
Google, San Francisco, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642065

AXNav: Replaying Accessibility Tests from Natural Language
Abstract

Developers and quality assurance testers often rely on manual testing to test accessibility features throughout the product lifecycle. Unfortunately, manual testing can be tedious, often has an overwhelming scope, and can be difficult to schedule amongst other development milestones. Recently, Large Language Models (LLMs) have been used for a variety of tasks, including automation of UIs. However, to our knowledge, no one has yet explored the use of LLMs in controlling assistive technologies for the purposes of supporting accessibility testing. In this paper, we explore the requirements of a natural-language-based accessibility testing workflow, starting with a formative study. From this we build a system that takes a manual accessibility test instruction in natural language (e.g., "Search for a show in VoiceOver") as input and uses an LLM combined with pixel-based UI understanding models to execute the test and produce a chaptered, navigable video. In each video, to help QA testers, we apply heuristics to detect and flag accessibility issues (e.g., text size not increasing with Large Text enabled, VoiceOver navigation loops). We evaluate this system through a 10-participant user study with accessibility QA professionals, who indicated that the tool would be very useful in their current work and performed tests similarly to how they would manually test the features. The study also reveals insights for future work on using LLMs for accessibility testing.
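
As a rough illustration of the workflow the abstract describes, the Python sketch below pairs an LLM planner with a pixel-based UI element detector in a plan-act loop, plus one of the mentioned heuristics (flagging a VoiceOver navigation loop). Every function body is a canned stand-in; the system's real interfaces are not given in this listing, so all names and signatures here are hypothetical.

```python
from typing import List, Optional

def detect_ui_elements(screenshot_path: str) -> List[str]:
    """Stand-in for a pixel-based UI understanding model (hypothetical)."""
    return ["SearchField 'Search'", "Button 'Play'", "Tab 'Library'"]

def plan_next_step(instruction: str, elements: List[str],
                   history: List[str]) -> Optional[str]:
    """Stand-in for the LLM planner: maps the test instruction and current
    screen to the next assistive-technology action, or None when done."""
    if not history:
        return "VoiceOver: focus SearchField 'Search' and type query"
    return None  # canned: this sketch stops after one step

def flags_navigation_loop(focus_trace: List[str]) -> bool:
    """Heuristic sketch: flag when VoiceOver focus revisits an element."""
    return len(set(focus_trace)) < len(focus_trace)

def run_test(instruction: str) -> List[str]:
    """Plan-act loop; the real system would also record a chaptered video."""
    history: List[str] = []
    while True:
        elements = detect_ui_elements("current_screen.png")
        step = plan_next_step(instruction, elements, history)
        if step is None:
            break
        history.append(step)  # real system executes the step here
    if flags_navigation_loop(history):
        print("Potential VoiceOver navigation loop detected")
    return history

if __name__ == "__main__":
    print(run_test("Search for a show in VoiceOver"))
```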

Authors
Maryam Taeb
Florida State University, Tallahassee, Florida, United States
Amanda Swearngin
Apple, Seattle, Washington, United States
Eldon Schoop
Apple, Seattle, Washington, United States
Ruijia Cheng
Apple, Seattle, Washington, United States
Yue Jiang
Aalto University, Espoo, Finland
Jeffrey Nichols
Apple, San Diego, California, United States
Paper URL

https://doi.org/10.1145/3613904.3642777

AccessLens: Auto-detecting Inaccessibility of Everyday Objects
Abstract

In our increasingly diverse society, everyday physical interfaces often present barriers, impacting individuals across various contexts. Overlooked elements, from small cabinet knobs to identical wall switches, can pose different contextual challenges and highlight an imperative need for solutions. Leveraging low-cost 3D-printed augmentations such as knob magnifiers and tactile labels seems promising, yet the process of discovering unrecognized barriers remains challenging because disability is context-dependent. We introduce AccessLens, an end-to-end system designed to identify inaccessible interfaces in daily objects and recommend 3D-printable augmentations for accessibility enhancement. Our approach involves training a detector using the novel AccessDB dataset, designed to automatically recognize 21 distinct inaccessibility classes (e.g., bar-small and round-rotate) within 6 common object categories (e.g., handle and knob). AccessMeta serves as a robust way to build a comprehensive dictionary linking these accessibility classes to open-source 3D augmentation designs. Experiments demonstrate our detector's performance in detecting inaccessible objects.
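
To sketch how the detection and recommendation stages could fit together, the Python fragment below runs a stubbed detector and looks its output up in a tiny stand-in for the AccessMeta dictionary. The class names echo the abstract's examples (round-rotate, bar-small); the dictionary entries, URLs, and function signatures are illustrative assumptions, not the released artifacts.

```python
from typing import Dict, List, Tuple

# Tiny stand-in for AccessMeta: inaccessibility class -> 3D-printable
# augmentation design. Keys echo the abstract's examples; URLs are placeholders.
ACCESSMETA: Dict[str, str] = {
    "round-rotate": "https://example.org/designs/knob-lever-adapter",
    "bar-small": "https://example.org/designs/handle-grip-extender",
}

def detect_inaccessibility(image_path: str) -> List[Tuple[str, float]]:
    """Stand-in for the AccessDB-trained detector; returns (class, confidence)."""
    return [("round-rotate", 0.91)]

def recommend_augmentations(image_path: str, threshold: float = 0.5) -> List[str]:
    """Map detected inaccessibility classes to augmentation designs."""
    recommendations: List[str] = []
    for cls, confidence in detect_inaccessibility(image_path):
        if confidence >= threshold and cls in ACCESSMETA:
            recommendations.append(f"{cls} -> {ACCESSMETA[cls]}")
    return recommendations

if __name__ == "__main__":
    print(recommend_augmentations("kitchen_cabinet.jpg"))
```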

Authors
Nahyun Kwon
Texas A&M University, College Station, Texas, United States
Qian Lu
Texas A&M University, College Station, Texas, United States
Muhammad Hasham Qazi
Texas A&M University, College Station, Texas, United States
Joanne Liu
Texas A&M University, College Station, Texas, United States
Changhoon Oh
Yonsei University, Seoul, Republic of Korea
Shu Kong
Texas A&M University, College Station, Texas, United States
Jeeeun Kim
Texas A&M University, College Station, Texas, United States
Paper URL

https://doi.org/10.1145/3613904.3642767

A Systematic Review of Ability-diverse Collaboration through Ability-based Lens in HCI
Abstract

In a world where diversity is increasingly recognised and celebrated, it is important for HCI to embrace evolving methods and theories so that technologies reflect the diversity of their users and are ability-centric. Interdependence Theory, an example of this evolution, highlights the interpersonal relationships between humans and technologies and how technologies should be designed to meet shared goals and outcomes for people, regardless of their abilities. This necessitates a contemporary understanding of "ability-diverse collaboration," which motivated this review. We offer an analysis of 117 papers sourced from the ACM Digital Library spanning the last two decades. We contribute (1) a unified taxonomy and the Ability-Diverse Collaboration Framework, (2) a reflective discussion and mapping of the current design space, and (3) future research opportunities and challenges. Finally, we have released our data and analysis tool to encourage the HCI research community to contribute to this ongoing effort.

Award
Honorable Mention
Authors
Lan Xiao
University College London, London, United Kingdom
Maryam Bandukda
University College London, London, United Kingdom
Katrin Angerbauer
University of Stuttgart, Stuttgart, Germany
Weiyue Lin
Peking University, Beijing, China
Tigmanshu Bhatnagar
University College London, London, United Kingdom
Michael Sedlmair
University of Stuttgart, Stuttgart, Germany
Catherine Holloway
University College London, London, United Kingdom
Paper URL

https://doi.org/10.1145/3613904.3641930
