WorldScribe: Towards Context-Aware Live Visual Descriptions

Abstract

Automated live visual descriptions can aid blind people in understanding their surroundings with autonomy and independence. However, providing descriptions that are rich, contextual, and just-in-time has been a long-standing challenge in accessibility. In this work, we develop WorldScribe, a system that generates automated live real-world visual descriptions that are customizable and adaptive to users' contexts: (i) WorldScribe's descriptions are tailored to users' intents and prioritized based on semantic relevance. (ii) WorldScribe is adaptive to visual contexts, e.g., providing consecutively succinct descriptions for dynamic scenes, while presenting longer and detailed ones for stable settings. (iii) WorldScribe is adaptive to sound contexts, e.g., increasing volume in noisy environments, or pausing when conversations start. Powered by a suite of vision, language, and sound recognition models, WorldScribe introduces a description generation pipeline that balances the tradeoffs between their richness and latency to support real-time use. The design of WorldScribe is informed by prior work on providing visual descriptions and a formative study with blind participants. Our user study and subsequent pipeline evaluation show that WorldScribe can provide real-time and fairly accurate visual descriptions to facilitate environment understanding that is adaptive and customized to users' contexts. Finally, we discuss the implications and further steps toward making live visual descriptions more context-aware and humanized.
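The adaptive behaviors described in the abstract can be illustrated with a small sketch. The snippet below is not WorldScribe's implementation; it is a minimal, hypothetical policy showing how description verbosity and speech volume might be chosen from estimated scene dynamics and ambient-noise level. All names, signals, and thresholds here are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical per-frame context estimates (not WorldScribe's actual signals)."""
    scene_motion: float        # 0.0 = static scene, 1.0 = highly dynamic
    noise_db: float            # ambient sound level in dB
    conversation_active: bool  # whether a nearby conversation is detected

def choose_presentation(ctx: Context) -> dict:
    """Toy policy mirroring the behaviors described in the abstract:
    succinct descriptions for dynamic scenes, detailed ones for stable
    scenes, louder speech in noisy environments, and pausing when a
    conversation starts."""
    if ctx.conversation_active:
        # Pause descriptions entirely while the user is in a conversation.
        return {"speak": False, "detail": None, "volume": 0.0}

    # Succinct, frequent descriptions when the scene changes quickly;
    # longer, richer ones when it is stable (threshold is illustrative).
    detail = "succinct" if ctx.scene_motion > 0.5 else "detailed"

    # Raise volume in noisy environments (threshold is illustrative).
    volume = 1.0 if ctx.noise_db > 70 else 0.6

    return {"speak": True, "detail": detail, "volume": volume}

if __name__ == "__main__":
    print(choose_presentation(Context(scene_motion=0.8, noise_db=75, conversation_active=False)))
    print(choose_presentation(Context(scene_motion=0.1, noise_db=40, conversation_active=False)))
```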

Award
Best Paper
Authors
Ruei-Che Chang
University of Michigan, Ann Arbor, Michigan, United States
Yuxuan Liu
University of Michigan, Ann Arbor, Michigan, United States
Anhong Guo
University of Michigan, Ann Arbor, Michigan, United States
Paper URL

https://doi.org/10.1145/3654777.3676375

Conference: UIST 2024

ACM Symposium on User Interface Software and Technology

Session: 2. Contextual Augmentations

Westin: Allegheny 2
4 presentations
2024-10-17 00:35:00 – 2024-10-17 01:35:00