Exploring Mobile Device Accessibility: Challenges, Insights, and Recommendations for Evaluation Methodologies
Description

With the ubiquitous use of mobile applications, it is paramount that they are accessible, so they can empower all users, including those with different needs. Determining whether an app is accessible requires conducting an accessibility evaluation. While accessibility evaluations have been thoroughly studied in the web domain, many open questions remain when evaluating mobile applications. This paper investigates mobile accessibility evaluation methodologies. We conducted four studies: an examination of accessibility reports from European Member States, interviews with accessibility experts, manual evaluations, and usability tests involving users. Our investigations have uncovered significant limitations in current evaluation methods, suggesting that the absence of authoritative guidelines and standards tailored specifically to mobile devices, comparable to those that exist for the web, hampers the effectiveness of accessibility evaluation and monitoring activities. Based on our findings, we present a set of recommendations aimed at improving the methodologies for assessing mobile applications’ accessibility.

Human I/O: Towards a Unified Approach to Detecting Situational Impairments
Description

Situationally Induced Impairments and Disabilities (SIIDs) can significantly hinder user experience in contexts such as poor lighting, noise, and multi-tasking. While prior research has introduced algorithms and systems to address these impairments, they predominantly cater to specific tasks or environments and fail to accommodate the diverse and dynamic nature of SIIDs. We introduce Human I/O, a unified approach to detecting a wide range of SIIDs by gauging the availability of human input/output channels. Leveraging egocentric vision, multimodal sensing, and reasoning with large language models, Human I/O achieves a 0.22 mean absolute error and an 82% accuracy in availability prediction across 60 in-the-wild egocentric video recordings spanning 32 different scenarios. Furthermore, while the core focus of our work is on detecting SIIDs rather than creating adaptive user interfaces, we showcase the efficacy of our prototype via a user study with 10 participants. Findings suggest that Human I/O significantly reduces effort and improves user experience in the presence of SIIDs, paving the way for more adaptive and accessible interactive systems.
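
To make the channel-availability idea concrete, here is a minimal Python sketch (not the authors' implementation): observations from egocentric sensing are summarised as text, a language model is asked to rate each human input/output channel, and predictions are scored with mean absolute error, as in the abstract. The `Observation`, `build_prompt`, and `query_llm` names are illustrative placeholders rather than parts of the real system.

```python
# Minimal sketch of channel-availability prediction; not the Human I/O code.

from dataclasses import dataclass
from statistics import mean

CHANNELS = ("vision", "hearing", "vocal", "hands")

@dataclass
class Observation:
    scene_description: str          # e.g. summarised from egocentric video
    ground_truth: dict[str, float]  # 0.0 = unavailable, 1.0 = fully available

def build_prompt(obs: Observation) -> str:
    """Ask the model to rate each channel's availability on a 0-1 scale."""
    return (
        "Given the situation below, rate the availability of the user's "
        f"{', '.join(CHANNELS)} channels from 0 to 1.\n"
        f"Situation: {obs.scene_description}"
    )

def query_llm(prompt: str) -> dict[str, float]:
    """Hypothetical LLM call; replaced by a fixed guess for illustration."""
    return {channel: 0.5 for channel in CHANNELS}

def evaluate(observations: list[Observation]) -> float:
    """Mean absolute error between predicted and ground-truth availability."""
    errors = []
    for obs in observations:
        prediction = query_llm(build_prompt(obs))
        errors.extend(abs(prediction[c] - obs.ground_truth[c]) for c in CHANNELS)
    return mean(errors)

if __name__ == "__main__":
    demo = [Observation(
        scene_description="User is carrying grocery bags in both hands "
                          "while walking along a noisy street.",
        ground_truth={"vision": 1.0, "hearing": 0.2, "vocal": 1.0, "hands": 0.0},
    )]
    print(f"MAE on demo data: {evaluate(demo):.2f}")
```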

AXNav: Replaying Accessibility Tests from Natural Language
Description

Developers and quality assurance testers often rely on manual testing to test accessibility features throughout the product lifecycle. Unfortunately, manual testing can be tedious, often has an overwhelming scope, and can be difficult to schedule amongst other development milestones. Recently, Large Language Models (LLMs) have been used for a variety of tasks, including automation of UIs. However, to our knowledge, no one has yet explored using LLMs to control assistive technologies in support of accessibility testing. In this paper, we explore the requirements of a natural-language-based accessibility testing workflow, starting with a formative study. From this we build a system that takes a manual accessibility test instruction in natural language (e.g., "Search for a show in VoiceOver") as input and uses an LLM combined with pixel-based UI understanding models to execute the test and produce a chaptered, navigable video. In each video, to help QA testers, we apply heuristics to detect and flag accessibility issues (e.g., text size not increasing with Large Text enabled, VoiceOver navigation loops). We evaluate this system through a 10-participant user study with accessibility QA professionals, who indicated that the tool would be very useful in their current work and who performed tests similarly to how they would manually test the features. The study also reveals insights for future work on using LLMs for accessibility testing.
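
The sketch below is a rough illustration, not the AXNav implementation: it shows the general shape of an LLM-planned test loop driven by a natural-language instruction, together with one heuristic of the kind mentioned above (flagging a VoiceOver navigation loop when the focus sequence starts repeating). `describe_screen` and `plan_next_action` are hypothetical stand-ins for the pixel-based UI understanding and LLM components.

```python
# Illustrative sketch of an LLM-planned accessibility test loop; not AXNav itself.

def describe_screen(step: int) -> str:
    """Hypothetical UI-understanding step; returns the currently focused element."""
    # A contrived focus sequence that repeats after the third element.
    return ["Search field", "Show cell", "Play button"][step % 3]

def plan_next_action(instruction: str, screen: str) -> str:
    """Hypothetical LLM planner mapping instruction + screen state to a device action."""
    return f"VoiceOver swipe right (goal: {instruction!r}, focused: {screen!r})"

def detect_navigation_loop(focus_history: list[str], window: int = 3) -> bool:
    """Heuristic: flag a loop if the last `window` focuses repeat the previous `window`."""
    if len(focus_history) < 2 * window:
        return False
    return focus_history[-window:] == focus_history[-2 * window:-window]

def run_test(instruction: str, max_steps: int = 10) -> list[str]:
    issues, focus_history = [], []
    for step in range(max_steps):
        focused = describe_screen(step)
        focus_history.append(focused)
        plan_next_action(instruction, focused)  # would drive the device here
        if detect_navigation_loop(focus_history):
            issues.append(f"Possible VoiceOver navigation loop at step {step}")
            break
    return issues

if __name__ == "__main__":
    print(run_test("Search for a show in VoiceOver"))
```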

AccessLens: Auto-detecting Inaccessibility of Everyday Objects
Description

In our increasingly diverse society, everyday physical interfaces often present barriers that affect individuals across various contexts. These overlooked barriers, from small cabinet knobs to identical wall switches, can pose different challenges depending on the context and highlight an imperative need for solutions. Leveraging low-cost 3D-printed augmentations such as knob magnifiers and tactile labels seems promising, yet discovering unrecognized barriers remains challenging because disability is context-dependent. We introduce AccessLens, an end-to-end system designed to identify inaccessible interfaces in daily objects and recommend 3D-printable augmentations for accessibility enhancement. Our approach involves training a detector on the novel AccessDB dataset, designed to automatically recognize 21 distinct inaccessibility classes (e.g., bar-small and round-rotate) within 6 common object categories (e.g., handle and knob). AccessMeta serves as a robust way to build a comprehensive dictionary linking these inaccessibility classes to open-source 3D augmentation designs. Experiments demonstrate our detector's performance in detecting inaccessible objects.
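
As an illustration only (not the authors' code), the following sketch mirrors the pipeline shape described above: a detector emits inaccessibility classes for objects in an image, and an AccessMeta-style dictionary maps each class to candidate 3D-printable augmentation designs. The class names, design file names, and `detect_inaccessibility` function are assumed placeholders.

```python
# Illustrative detect-then-recommend pipeline; not the AccessLens implementation.

from dataclasses import dataclass

@dataclass
class Detection:
    object_category: str        # e.g. "knob", "handle"
    inaccessibility_class: str  # e.g. "round-rotate", "bar-small"
    confidence: float

# Placeholder stand-in for an AccessMeta-style dictionary linking classes to designs.
AUGMENTATION_DICTIONARY = {
    "round-rotate": ["lever-adapter.stl", "knob-magnifier.stl"],
    "bar-small": ["enlarged-grip.stl"],
}

def detect_inaccessibility(image_path: str) -> list[Detection]:
    """Hypothetical detector; the real system trains a model on AccessDB."""
    return [Detection("knob", "round-rotate", confidence=0.91)]

def recommend_augmentations(image_path: str, threshold: float = 0.5) -> dict[str, list[str]]:
    """Map confident detections to candidate 3D-printable augmentation designs."""
    recommendations = {}
    for det in detect_inaccessibility(image_path):
        if det.confidence >= threshold:
            key = f"{det.object_category} ({det.inaccessibility_class})"
            recommendations[key] = AUGMENTATION_DICTIONARY.get(det.inaccessibility_class, [])
    return recommendations

if __name__ == "__main__":
    print(recommend_augmentations("kitchen.jpg"))
```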

A Systematic Review of Ability-diverse Collaboration through Ability-based Lens in HCI
Description

In a world where diversity is increasingly recognised and celebrated, it is important for HCI to embrace evolving methods and theories so that technologies reflect the diversity of their users and are ability-centric. Interdependence Theory, an example of this evolution, highlights the interpersonal relationships between humans and technologies and how technologies should be designed to meet shared goals and outcomes for people, regardless of their abilities. This necessitates a contemporary understanding of "ability-diverse collaboration," which motivated this review. We offer an analysis of 117 papers sourced from the ACM Digital Library spanning the last two decades. We contribute (1) a unified taxonomy and the Ability-Diverse Collaboration Framework, (2) a reflective discussion and mapping of the current design space, and (3) future research opportunities and challenges. Finally, we have released our data and analysis tool to encourage the HCI research community to contribute to this ongoing effort.
