Universal Accessibility B

Conference Name
CHI 2024
RASSAR: Room Accessibility and Safety Scanning in Augmented Reality
Abstract

The safety and accessibility of our homes are critical and evolve as we age, become ill, host guests, or experience life events such as having children. Researchers and health professionals have created assessment instruments such as checklists that enable homeowners and trained experts to identify and mitigate safety and access issues. With advances in computer vision, augmented reality (AR), and mobile sensors, new approaches are now possible. We introduce RASSAR, a mobile AR application for semi-automatically identifying, localizing, and visualizing indoor accessibility and safety issues such as an inaccessible table height or unsafe loose rugs using LiDAR and real-time computer vision. We present findings from three studies: a formative study with 18 participants across five stakeholder groups to inform the design of RASSAR, a technical performance evaluation across ten homes demonstrating state-of-the-art performance, and a user study with six stakeholders. We close with a discussion of future AI-based indoor accessibility assessment tools, RASSAR's extensibility, and key application scenarios.
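
The core pattern the abstract describes, comparing sensed object geometry against guideline thresholds, can be illustrated compactly. The Python sketch below is a hypothetical illustration rather than RASSAR's actual pipeline: the object classes, height ranges, and names (Detection, check_detection, ACCESSIBLE_HEIGHT_RANGES) are assumptions for exposition.

```python
# A minimal sketch of rule-based issue flagging, assuming metric object
# detections (e.g., from a LiDAR-backed AR scan). Classes, thresholds,
# and names are illustrative assumptions, not RASSAR's actual rule set.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    label: str       # detected object class, e.g., "table"
    height_m: float  # estimated height above the floor, in meters

# Hypothetical per-class accessible height ranges (min_m, max_m).
ACCESSIBLE_HEIGHT_RANGES = {
    "table": (0.71, 0.86),
    "light_switch": (0.38, 1.22),
}

def check_detection(det: Detection) -> Optional[str]:
    """Return an issue description if the detection violates its range."""
    if det.label not in ACCESSIBLE_HEIGHT_RANGES:
        return None
    lo, hi = ACCESSIBLE_HEIGHT_RANGES[det.label]
    if det.height_m < lo:
        return f"{det.label} at {det.height_m:.2f} m is below {lo:.2f} m"
    if det.height_m > hi:
        return f"{det.label} at {det.height_m:.2f} m exceeds {hi:.2f} m"
    return None

scan = [Detection("table", 0.95), Detection("light_switch", 1.05)]
for det in scan:
    print(check_detection(det) or f"{det.label}: no issue found")
```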

Authors
Xia Su
University of Washington, Seattle, Washington, United States
Kaiming Cheng
University of Washington, Seattle, Washington, United States
Han Zhang
University of Washington, Seattle, Washington, United States
Jaewook Lee
University of Washington, Seattle, Washington, United States
Qiaochu Liu
Tsinghua University, Beijing, China
Wyatt Olson
University of Washington, Seattle, Washington, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642140

Video

A Design Space for Vision Augmentations and Augmented Human Perception using Digital Eyewear
Abstract

Head-mounted displays were originally introduced to directly present computer-generated information to the human eye. More recently, the potential to use this kind of technology to support human vision and augment human perception has been actively pursued, with applications such as compensating for visual impairments or aiding unimpaired vision. Unfortunately, a systematic analysis of the field is missing. Within this work, we close that gap by presenting a design space for vision augmentations that allows researchers to systematically explore the field of digital eyewear for vision aid and how it can augment the human visual system. We test our design space against currently available solutions and conceptually develop new solutions. The design space and findings can guide future development and can lead to a consistent categorisation of the many existing approaches.
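
As a sketch of how such a design space can be made operational, the snippet below encodes dimensions as a grid and checks where a given eyewear solution sits in it, which also exposes unfilled cells as candidates for new designs. The dimension names and categories here are invented placeholders, not the dimensions proposed in the paper.

```python
# Minimal sketch of using a design space as a classification grid. The
# dimensions and categories below are placeholders for exposition; they
# are not the dimensions the paper proposes.
DESIGN_SPACE = {
    "purpose": {"compensate_impairment", "aid_unimpaired_vision"},
    "processing": {"optical", "digital"},
    "field_of_view": {"full", "partial"},
}

def classify(solution: dict) -> list:
    """Validate a solution's coordinates against the design space,
    returning any dimensions with unknown or missing values."""
    problems = []
    for dim, categories in DESIGN_SPACE.items():
        value = solution.get(dim)
        if value not in categories:
            problems.append(f"{dim}: {value!r} not in {sorted(categories)}")
    return problems

# A hypothetical digital magnification aid, placed in the grid:
magnifier = {"purpose": "compensate_impairment",
             "processing": "digital",
             "field_of_view": "partial"}
print(classify(magnifier) or "valid point in the design space")
```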

Authors
Tobias Langlotz
University of Otago, Dunedin, New Zealand
Jonathan Sutton
Copenhagen University, Copenhagen, Denmark
Holger Regenbrecht
University of Otago, Dunedin, Otago, New Zealand
Paper URL

doi.org/10.1145/3613904.3642380

Video

“I never realized sidewalks were a big deal”: A Case Study of a Community-Driven Sidewalk Accessibility Assessment using Project Sidewalk
Abstract

Despite decades of effort, pedestrian infrastructure in cities continues to be unsafe or inaccessible to people with disabilities. In this paper, we examine the potential of community-driven digital civics to assess sidewalk accessibility through a deployment study of an open-source crowdsourcing tool called Project Sidewalk. We explore Project Sidewalk's potential as a platform for civic learning and service. Specifically, we assess its effectiveness as a tool for community members to learn about human mobility, urban planning, and accessibility advocacy. Our findings demonstrate that community-driven digital civics can support accessibility advocacy and education, raise community awareness, and drive pro-social behavioral change. We also outline key considerations for deploying digital civic tools in future community-led accessibility initiatives.
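
For readers who want to explore the underlying data, Project Sidewalk publishes its crowdsourced labels through a public API. The sketch below assumes an endpoint shape, host, and parameter names for illustration; the actual interface may differ, so consult the project's API documentation.

```python
# Minimal sketch of summarizing crowdsourced sidewalk labels by type.
# The endpoint URL, bounding-box parameters, and "label_type" field are
# assumptions for illustration, not a verified API contract.
from collections import Counter
import requests

API_URL = "https://sidewalk-sea.cs.washington.edu/v2/access/attributes"  # assumed
bbox = {"lat1": 47.61, "lng1": -122.33, "lat2": 47.63, "lng2": -122.30}  # assumed

def label_counts(url: str, params: dict) -> Counter:
    """Fetch GeoJSON features and tally them by accessibility label type."""
    features = requests.get(url, params=params, timeout=30).json()["features"]
    return Counter(f["properties"].get("label_type", "Unknown") for f in features)

if __name__ == "__main__":
    for label, n in label_counts(API_URL, bbox).most_common():
        print(f"{label}: {n}")
```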

Authors
Chu Li
University of Washington, Seattle, Washington, United States
Katrina Oi Yau Ma
University of Washington, Seattle, Washington, United States
Michael Saugstad
University of Washington, Seattle, Washington, United States
Kie Fujii
Hackensack Meridian School of Medicine, Nutley, New Jersey, United States
Molly Delaney
University of Illinois at Chicago, Chicago, Illinois, United States
Yochai Eisenberg
University of Illinois at Chicago, Chicago, Illinois, United States
Delphine Labbé
University of Illinois at Chicago, Chicago, Illinois, United States
Judy L. Shanley
Easterseals, Boston, Massachusetts, United States
Devon Snyder
University of Illinois at Chicago, Chicago, Illinois, United States
Florian P. P. Thomas
Hackensack Meridian School of Medicine, Hackensack, New Jersey, United States
Jon E. Froehlich
University of Washington, Seattle, Washington, United States
Paper URL

doi.org/10.1145/3613904.3642003

Video

A Virtual Reality Scene Taxonomy: Identifying and Designing Accessible Scene-Viewing Techniques
Abstract

Virtual environments (VEs) afford similar interactions to those in physical environments: individuals can navigate and manipulate objects. Yet, a prerequisite for these interactions is being able to view the environment. Despite the existence of numerous scene-viewing techniques (i.e., interaction techniques that facilitate the visual perception of virtual scenes), there is no guidance to help designers choose which techniques to implement. We propose a scene taxonomy based on the visual structure and task within a VE by drawing on literature from cognitive psychology and computer vision, as well as VR applications. We demonstrate how the taxonomy can be used by applying it to an accessibility problem, namely limited head mobility. We used the taxonomy to classify existing scene-viewing techniques and generate three new techniques that did not require head movement. In our evaluation of the techniques with 16 participants, we discovered that participants identified trade-offs in design considerations such as accessibility, realism, and spatial awareness that would influence whether they would use the new techniques. Our results demonstrate the potential of the scene taxonomy to help designers reason about the relationships between VR interactions, tasks, and environments.
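
To make "scene-viewing without head movement" concrete, the sketch below remaps a handheld controller's thumbstick to view rotation so a scene can be surveyed with the head still. It is a generic illustration under assumed conventions (yaw and pitch in degrees, a 60 Hz update loop), not a reproduction of the three techniques the paper evaluates.

```python
# Minimal sketch of one head-movement-free scene-viewing technique:
# integrating thumbstick deflection into view rotation. An illustrative
# stand-in, not one of the paper's evaluated techniques.
import math

def update_view(yaw, pitch, stick_x, stick_y, dt, speed_deg_s=90.0):
    """Integrate thumbstick deflection into view angles (degrees)."""
    yaw = (yaw + stick_x * speed_deg_s * dt) % 360.0
    # Clamp pitch so the view cannot flip over the poles.
    pitch = max(-89.0, min(89.0, pitch + stick_y * speed_deg_s * dt))
    return yaw, pitch

def forward_vector(yaw, pitch):
    """Convert view angles to a unit gaze direction (x, y, z)."""
    cy, sy = math.cos(math.radians(yaw)), math.sin(math.radians(yaw))
    cp, sp = math.cos(math.radians(pitch)), math.sin(math.radians(pitch))
    return (cp * sy, sp, cp * cy)

# Example: half a second of full-right deflection turns the view 45 degrees.
yaw, pitch = 0.0, 0.0
for _ in range(30):  # 30 frames at ~60 Hz
    yaw, pitch = update_view(yaw, pitch, 1.0, 0.0, dt=1 / 60)
print(round(yaw, 1), forward_vector(yaw, pitch))
```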

Authors
Rachel L. Franz
University of Washington, Seattle, Washington, United States
Sasa Junuzovic
Microsoft Research, Redmond, Washington, United States
Martez E. Mott
Microsoft Research, Redmond, Washington, United States
Video