(Computer) Vision in Action: Comparing Remote Sighted Assistance and a Multimodal Voice Agent in Inspection Sequences

Abstract

Does human–AI assistance unfold in the same way as human–human assistance? This research explores what can be learned from the expertise of blind individuals and sighted volunteers to inform the design of multimodal voice agents and address the enduring challenge of proactivity. Drawing on granular analysis of two representative fragments from a larger corpus, we contrast the practices co-produced by an experienced human remote sighted assistant and a blind participant—as they collaborate to find a stain on a blanket over the phone—with those achieved when the same participant worked with a multimodal voice agent on the same task, a few moments earlier. This comparison enables us to specify precisely which fundamental proactive practices the agent did not enact in situ. We conclude that, so long as multimodal voice agents cannot produce environmentally occasioned vision-based actions, they will lack a key resource relied upon by human remote sighted assistants.

Authors
Damien Rudaz
University of Copenhagen, Copenhagen, Denmark
Barbara Nino Carreras
University of Copenhagen, Copenhagen, Denmark
Sara Merlino
University of Copenhagen, Copenhagen, Denmark
Brian L. Due
University of Copenhagen, Copenhagen, Denmark
Barry Brown
University of Copenhagen, Copenhagen, Denmark

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Designing Care Futures

P1 - Room 131
7 presentations
2026-04-16, 20:15–21:45