Investigating Use Cases of AI-Powered Scene Description Applications for Blind and Low Vision People

Abstract

“Scene description” applications that describe visual content in a photo are useful daily tools for blind and low vision (BLV) people. Researchers have studied their use, but they have only explored those that leverage remote sighted assistants; little is known about applications that use AI to generate their descriptions. Thus, to investigate their use cases, we conducted a two-week diary study where 16 BLV participants used an AI-powered scene description application we designed. Through their diary entries and follow-up interviews, users shared their information goals and assessments of the visual descriptions they received. We analyzed the entries and found frequent use cases, such as identifying visual features of known objects, and surprising ones, such as avoiding contact with dangerous objects. We also found users scored the descriptions relatively low on average, 2.7 out of 5 (SD=1.5) for satisfaction and 2.4 out of 4 (SD=1.2) for trust, showing that descriptions still need significant improvements to deliver satisfying and trustworthy experiences. We discuss future opportunities for AI as it becomes a more powerful accessibility tool for BLV users.

Authors
Ricardo E. Gonzalez Penuela
Cornell Tech, Cornell University, New York, New York, United States
Jazmin Collins
Cornell University, Ithaca, New York, United States
Cynthia L. Bennett
Google, New York, New York, United States
Shiri Azenkot
Cornell Tech, New York, New York, United States
Paper URL

doi.org/10.1145/3613904.3642211

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Supporting Accessibility of Text, Image and Video B

Room: 313B
5 presentations
2024-05-14 23:00:00 – 2024-05-15 00:20:00