Inclusive Interactions: Accessibility Techniques and Systems

Conference Name
UIST 2023
BrushLens: Hardware Interaction Proxies for Accessible Touchscreen Interface Actuation
Abstract

Touchscreen devices, designed around an assumed range of user abilities and interaction patterns, are often difficult for individuals with diverse abilities to operate independently. Prior efforts to improve accessibility through tools or algorithms required alterations to touchscreen hardware or software, making them inapplicable to the large number of existing legacy devices. In this paper, we introduce BrushLens, a hardware interaction proxy that performs physical interactions on behalf of users while allowing them to continue using accessible interfaces, such as screen readers and assistive touch on smartphones, for interface exploration and command input. BrushLens maintains an interface model for accurate target localization and uses exchangeable actuators for physical actuation across a variety of device types, effectively reducing user workload and minimizing the risk of mistouch. Our evaluations show that BrushLens lowers the mistouch rate and enables users with visual and motor impairments to interact with otherwise inaccessible physical touchscreens more effectively.
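
The paper does not publish implementation code, but the architecture described above (an interface model for target localization plus exchangeable actuators) can be illustrated with a minimal Python sketch. All class and method names here (Actuator, SolenoidTapper, InterfaceModel, InteractionProxy) are invented for illustration and are not the authors' implementation.

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class UIElement:
        label: str
        x_mm: float  # target center on the physical touchscreen, in millimeters
        y_mm: float

    class Actuator(ABC):
        """Exchangeable physical actuation back end (hypothetical interface)."""
        @abstractmethod
        def tap(self, x_mm: float, y_mm: float) -> None: ...

    class SolenoidTapper(Actuator):
        def tap(self, x_mm: float, y_mm: float) -> None:
            # Placeholder: a real driver would position and fire hardware here.
            print(f"[solenoid] tapping at ({x_mm:.1f}, {y_mm:.1f}) mm")

    class InterfaceModel:
        """Maps accessible UI elements to physical screen coordinates."""
        def __init__(self, elements: list[UIElement]):
            self._by_label = {e.label: e for e in elements}

        def locate(self, label: str) -> UIElement:
            return self._by_label[label]

    class InteractionProxy:
        """Performs the physical touch on behalf of the user."""
        def __init__(self, model: InterfaceModel, actuator: Actuator):
            self.model, self.actuator = model, actuator

        def activate(self, label: str) -> None:
            target = self.model.locate(label)            # screen-reader selection -> location
            self.actuator.tap(target.x_mm, target.y_mm)  # physical actuation

    # Usage: the user selects "Start" through their phone's screen reader,
    # and the proxy taps the corresponding spot on the inaccessible touchscreen.
    proxy = InteractionProxy(InterfaceModel([UIElement("Start", 40.0, 120.0)]), SolenoidTapper())
    proxy.activate("Start")

Swapping SolenoidTapper for another Actuator subclass is how the "exchangeable actuators" idea would surface in code: the proxy logic stays the same while the physical mechanism changes.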

Authors
Chen Liang
University of Michigan, Ann Arbor, Michigan, United States
Yasha Iravantchi
University of Michigan, Ann Arbor, Michigan, United States
Thomas Krolikowski
University of Michigan, Ann Arbor, Michigan, United States
Ruijie Geng
University of Michigan, Ann Arbor, Michigan, United States
Alanson P. Sample
University of Michigan, Ann Arbor, Michigan, United States
Anhong Guo
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3586183.3606730

TacNote: Tactile and Audio Note-Taking for Non-Visual Access
Abstract

Blind and visually impaired (BVI) people primarily rely on non-visual senses to interact with a physical environment. Doing so imposes a high cognitive load when perceiving and memorizing the presence of a large set of objects, such as at home or in a learning setting. In this work, we explored opportunities to enable object-centric note-taking by using a 3D printing pen for interactive, personalized tactile annotations. We first identified the benefits and challenges of self-created tactile graphics in a formative diary study. Then, we developed TacNote, a system that enables BVI users to annotate, explore, and memorize critical information associated with everyday objects. Using TacNote, users create tactile graphics with a 3D printing pen and attach them to the target objects. They capture and organize the physical labels using TacNote’s camera-based mobile app. In addition, they can specify locations, ordering, and hierarchy via finger-pointing interaction and receive audio feedback. Our user study with ten BVI participants showed that TacNote effectively alleviated the memory burden, offering a promising solution for enhancing users’ access to information.
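
The abstract mentions organizing tactile labels by location, ordering, and hierarchy; below is a minimal Python data-model sketch of what such an organization might look like. All names (TactileLabel, walk, the audio file paths) are illustrative assumptions, not TacNote's actual schema.

    from dataclasses import dataclass, field

    @dataclass
    class TactileLabel:
        """A self-made tactile graphic attached to an object, paired with an audio note."""
        name: str
        audio_note: str                                   # path to a recorded description
        order: int = 0                                    # position among siblings, if sequenced
        children: list["TactileLabel"] = field(default_factory=list)

        def walk(self, depth: int = 0):
            """Depth-first traversal, e.g. to read labels back in hierarchy order."""
            yield depth, self
            for child in sorted(self.children, key=lambda c: c.order):
                yield from child.walk(depth + 1)

    # Usage: a spice rack whose jars carry ordered child labels.
    spice_rack = TactileLabel("spice rack", "rack.m4a", children=[
        TactileLabel("paprika", "paprika.m4a", order=2),
        TactileLabel("cinnamon", "cinnamon.m4a", order=1),
    ])
    for depth, label in spice_rack.walk():
        print("  " * depth + label.name)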

Authors
Wan-Chen Lee
National Taiwan University, Taipei, Taiwan
Ching-Wen Hung
National Taiwan University, Taipei, Taiwan
Chao-Hsien Ting
National Taiwan University, Taipei, Taiwan
Peggy Chi
National Taiwan University, Taipei, Taiwan
Bing-Yu Chen
National Taiwan University, Taipei, Taiwan
Paper URL

https://doi.org/10.1145/3586183.3606784

GenAssist: Making Image Generation Accessible
Abstract

Blind and low vision (BLV) creators use images to communicate with sighted audiences. However, creating or retrieving images is challenging for BLV creators as it is difficult to use authoring tools or assess image search results. Thus, creators limit the types of images they create or recruit sighted collaborators. While text-to-image generation models let creators generate high-fidelity images based on a text description (i.e. prompt), it is difficult to assess the content and quality of generated images. We present GenAssist, a system to make text-to-image generation accessible. Using our interface, creators can verify whether generated image candidates followed the prompt, access additional details in the image not specified in the prompt, and skim a summary of similarities and differences between image candidates. To power the interface, GenAssist uses a large language model to generate visual questions, vision-language models to extract answers, and a large language model to summarize the results. Our study with 12 BLV creators demonstrated that GenAssist enables and simplifies the process of image selection and generation, making visual authoring more accessible to all.
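
The pipeline described above (a large language model proposes visual questions, vision-language models answer them per image, and a large language model summarizes) can be sketched as follows. The llm and vlm_answer functions are stand-in stubs for illustration, not the models or prompts used in the paper; the flow, not the wording, is the point.

    # Stand-in stubs for real large language model / vision-language model calls.
    def llm(prompt: str) -> str:
        return "Is the main object present?\nDoes the style match the prompt?"

    def vlm_answer(image_path: str, question: str) -> str:
        return f"stubbed answer for {image_path}"

    def describe_candidates(prompt: str, image_paths: list[str]) -> str:
        # 1. The LLM proposes visual questions about the prompt (and details beyond it).
        questions = [q for q in llm(
            f"List short visual questions to verify an image generated for: {prompt}"
        ).splitlines() if q.strip()]

        # 2. A vision-language model answers each question for every candidate image.
        answers = {
            path: {q: vlm_answer(path, q) for q in questions}
            for path in image_paths
        }

        # 3. The LLM summarizes similarities and differences across candidates.
        return llm(f"Summarize similarities and differences between candidates: {answers}")

    print(describe_candidates("a red bicycle by the sea", ["cand_1.png", "cand_2.png"]))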

Award
Best Paper
Authors
Mina Huh
University of Texas at Austin, Austin, Texas, United States
Yi-Hao Peng
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Amy Pavel
University of Texas at Austin, Austin, Texas, United States
Paper URL

https://doi.org/10.1145/3586183.3606735

Front Row: Automatically Generating Immersive Audio Representations of Tennis Broadcasts for Blind Viewers
Abstract

Blind and low-vision (BLV) people face challenges watching sports due to the lack of accessibility of sports broadcasts. Currently, BLV people rely on descriptions from TV commentators, radio announcers, or their friends to understand the game. These descriptions, however, do not allow BLV viewers to visualize the action by themselves. We present Front Row, a system that automatically generates an immersive audio representation of sports broadcasts, specifically tennis, allowing BLV viewers to more directly perceive what is happening in the game. Front Row first recognizes gameplay from the video feed using computer vision, then renders players’ positions and shots via spatialized (3D) audio cues. User evaluations with 12 BLV participants show that Front Row gives BLV viewers a more accurate understanding of the game compared to TV and radio, enabling viewers to form their own opinions on players' moods and strategies. We discuss future implications of Front Row and illustrate several applications, including a Front Row plug-in for video streaming platforms to enable BLV people to visualize the action in sports videos across the Web.
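
Front Row renders full spatialized (3D) audio cues; the toy Python sketch below illustrates only the simplest ingredient of that idea, mapping a player's lateral court position to left/right gains with constant-power panning. The function name and the default court width (10.97 m, a doubles court) are assumptions for illustration, not the paper's implementation.

    import math

    def stereo_gains(court_x: float, court_width: float = 10.97) -> tuple[float, float]:
        """Constant-power pan: map a lateral court position in meters
        (0 = left sideline from the viewer's perspective) to (left, right) gains."""
        pan = max(0.0, min(1.0, court_x / court_width))  # normalize to 0..1 across the court
        angle = pan * math.pi / 2                         # 0 rad = hard left, pi/2 = hard right
        return math.cos(angle), math.sin(angle)

    # A shot struck near the right sideline should sound mostly in the right ear.
    left, right = stereo_gains(9.5)
    print(f"left={left:.2f}, right={right:.2f}")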

Authors
Gaurav Jain
Columbia University, New York, New York, United States
Basel Hindi
Columbia University, New York, New York, United States
Connor Courtien
Hunter College, New York, New York, United States
Xin Yi Therese Xu
Pomona College, Claremont, California, United States
Conrad Wyrick
University of Florida, Gainesville, Florida, United States
Michael C. Malcolm
SUNY at Albany, Albany, New York, United States
Brian A. Smith
Columbia University, New York, New York, United States
Paper URL

https://doi.org/10.1145/3586183.3606830

V-DAT (Virtual Reality Data Analysis Tool): Supporting Self-Awareness for Autistic People from Multimodal VR Sensor Data
Abstract

Virtual reality (VR) has become a valuable tool for social and educational purposes for autistic people, as it provides flexible environmental support for creating a variety of experiences. A growing body of recent research has examined the behaviors of autistic people using sensor-based data, both to better understand autistic people and to investigate the effectiveness of VR. Comprehensive analysis of the various signals that can be easily collected in a VR environment can further promote this understanding. While such quantitative evidence has the potential to help both autistic people and others (e.g., autism experts) understand the behaviors of autistic people, existing studies have focused on single-signal analysis and have not determined the acceptability of signal analysis results from the autistic person's point of view. To facilitate the use of multiple sensor signals in VR for autistic people and experts, we introduce V-DAT (Virtual Reality Data Analysis Tool), designed to support a VR sensor data handling pipeline. V-DAT takes into account four sensor modalities (head position and rotation, eye movement, audio, and physiological signals) that are actively used in current VR research for autistic people. We explain the characteristics and processing methods of the data for each modality, as well as V-DAT's analysis and comprehensive visualizations. We also conduct a case study to investigate the feasibility of V-DAT as a way of broadening understanding of autistic people from the perspectives of both autistic people and autism experts. Finally, we discuss issues in the V-DAT development process and complementary measures for the applicability and scalability of a sensor data management system for autistic people.
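
As a concrete picture of the four modalities V-DAT handles, here is a minimal Python data-model sketch. Field names, units, and the fixation_ratio helper are assumptions made for illustration; the paper's actual data schema is not given in the abstract.

    from dataclasses import dataclass, field

    @dataclass
    class VRSample:
        """One timestamped sample across the four modalities V-DAT considers."""
        t: float                                    # seconds since session start
        head_position: tuple[float, float, float]   # x, y, z in meters
        head_rotation: tuple[float, float, float]   # yaw, pitch, roll in degrees
        gaze_target: str = ""                       # object currently fixated, if any
        audio_rms: float = 0.0                      # loudness of the participant's speech
        heart_rate_bpm: float = 0.0                 # physiological signal

    @dataclass
    class VRSession:
        participant_id: str
        samples: list[VRSample] = field(default_factory=list)

        def fixation_ratio(self, target: str) -> float:
            """Fraction of samples in which gaze rests on a given object."""
            if not self.samples:
                return 0.0
            return sum(s.gaze_target == target for s in self.samples) / len(self.samples)

    # Usage: compute how often a participant looked at the virtual teacher.
    session = VRSession("P01", [VRSample(0.0, (0.0, 1.6, 0.0), (0.0, 0.0, 0.0), gaze_target="teacher")])
    print(session.fixation_ratio("teacher"))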

Authors
Bogoan Kim
Hanyang University, Seoul, Korea, Republic of
Dayoung Jeong
Hanyang University, Seoul, Korea, Republic of
Jennifer G. Kim
Georgia Institute of Technology, Atlanta, Georgia, United States
Hwajung Hong
KAIST, Daejeon, Korea, Republic of
Kyungsik Han
Hanyang University, Seoul, Korea, Republic of
Paper URL

https://doi.org/10.1145/3586183.3606797

Starrypia: An AR Gamified Music Adjuvant Treatment Application for Children with Autism Based on Combined Therapy
Abstract

In this paper, we present Starrypia, a lightweight gamified music adjuvant treatment application that aims to improve the symptoms of children with mild autism, eliminating the geographical and time constraints of traditional treatment. Adopting ABA (Applied Behavior Analysis) behavioral theory as its principle, Starrypia follows the stimulus-response-reinforcement-pause process and incorporates music therapy and sensory integration. Based on AR, Starrypia provides multi-sensory intervention through music generated by a BiLSTM deep model, 3D visual scenes, and touch interaction to keep children focused and calm. We conducted a controlled experiment with 20 children to test Starrypia's effectiveness and appeal. Children's pre-test and post-test scores on two autism rating scales, along with their performance during the test, were used to measure their abilities and engagement. Experimental results indicated that children showed great interest in Starrypia and exhibited evident symptom remission and improvement in overall abilities after 4 weeks of use. In conclusion, Starrypia is practical in terms of both therapeutic effect and user experience, and is notably effective in promoting sensory ability.
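
The abstract mentions music generated by a BiLSTM model but gives no implementation details. The PyTorch sketch below shows what a toy bidirectional-LSTM next-note predictor could look like; the architecture, layer sizes, and note vocabulary here are assumptions for illustration only, not the authors' model.

    import torch
    import torch.nn as nn

    class NoteBiLSTM(nn.Module):
        """Toy bidirectional LSTM that predicts the next note of a melody."""
        def __init__(self, n_notes: int = 128, embed_dim: int = 32, hidden: int = 64):
            super().__init__()
            self.embed = nn.Embedding(n_notes, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_notes)

        def forward(self, notes: torch.Tensor) -> torch.Tensor:
            x = self.embed(notes)            # (batch, time, embed_dim)
            out, _ = self.lstm(x)            # (batch, time, 2 * hidden)
            return self.head(out[:, -1])     # logits for the next note

    # Usage: feed a 16-step MIDI-like note sequence and read off the predicted next note.
    model = NoteBiLSTM()
    sequence = torch.randint(0, 128, (1, 16))
    print("predicted next note:", model(sequence).argmax(dim=-1).item())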

Authors
Yu Cai
Shanghai Jiao Tong University, Shanghai, China
Zhao Liu
Shanghai Jiao Tong University, Shanghai, China
Zhuo Yang
East China University of Science and Technology, Shanghai, China
Yilan Tan
Shanghai Jiao Tong University, Shanghai, China
Junwei Zhang
Shanghai Jiao Tong University, Shanghai, China
Shuo Tang
Tongji University, Shanghai, China
Paper URL

https://doi.org/10.1145/3586183.3606755
