Understanding the Use of a Large Language Model-Powered Guide to Make Virtual Reality Accessible for Blind and Low Vision People

Abstract

As social virtual reality (VR) grows more popular, addressing accessibility for blind and low vision (BLV) users is increasingly critical. Researchers have proposed an AI “sighted guide” to help users navigate VR and answer their questions, but it has not been studied with users. To address this gap, we developed a large language model (LLM)-powered guide and studied its use with 16 BLV participants in virtual environments with confederates posing as other users. We found that when alone, participants treated the guide as a tool, but treated it companionably around others, giving it nicknames, rationalizing its mistakes with its appearance, and encouraging confederate-guide interaction. Our work furthers understanding of guides as a versatile method for VR accessibility and presents design recommendations for future guides.

Authors
Jazmin Collins
Cornell University, Ithaca, New York, United States
Sharon Y. Lin
Cornell University, New York City, New York, United States
Tianqi Liu
Cornell University, Ithaca, New York, United States
Andrea Stevenson Won
Cornell University, Ithaca, New York, United States
Shiri Azenkot
Cornell Tech, New York, New York, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Non-visual and conversational experiences

P1 - Room 125
6 presentations
2026-04-17, 18:00–19:30