Drones have gained traction as a versatile form of assistive robot for Blind and Low Vision (BLV) people. Nonetheless, novel interaction techniques are required to enable BLV people to communicate with drones naturally. In this work, we built an LLM-powered assistive drone for BLV users. We leverage an LLM both to translate high-level user goals into step-by-step instructions for the drone and to extract visual information from the images it captures. Through a formative study with BLV users (N=9), we identified envisioned use cases and desired interaction modalities. We then took a participatory, iterative approach to building a prototype, incorporating feedback from 3 BLV users as well as 5 domain experts. Finally, we conducted a user study with an additional 6 BLV participants to evaluate the refined prototype, and received positive feedback. This work contributes to a growing body of research on harnessing the power of LLMs to build a more inclusive world.
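The two LLM roles named in the abstract, planning (goal to step-by-step drone instructions) and perception (extracting visual information from captured images), could be wired up roughly as in the following sketch. This is a hypothetical illustration, not the authors' implementation: `query_llm` is a stand-in for whatever multimodal LLM client the system uses, and the command vocabulary (`takeoff`, `move`, `rotate`, `capture_image`, `land`) is assumed for the example.

```python
import json

def query_llm(prompt: str, image_png: bytes | None = None) -> str:
    """Hypothetical stand-in for a multimodal LLM call (not the paper's API)."""
    raise NotImplementedError("plug in a real LLM client here")

# Assumed low-level command vocabulary, for illustration only.
PLANNER_INSTRUCTIONS = (
    "You control a small assistive drone for a blind user. Translate the goal\n"
    'into a JSON list of steps, each one of: {"action": "takeoff"},\n'
    '{"action": "move", "direction": "forward|back|left|right|up|down", "cm": N},\n'
    '{"action": "rotate", "degrees": N}, {"action": "capture_image"},\n'
    '{"action": "land"}. Respond with JSON only.\n'
)

def plan_steps(goal: str) -> list[dict]:
    """Translate a high-level user goal into step-by-step drone instructions."""
    raw = query_llm(PLANNER_INSTRUCTIONS + "Goal: " + goal)
    return json.loads(raw)

def describe_image(image_png: bytes, question: str = "What is ahead of me?") -> str:
    """Extract visual information from a drone camera frame via the LLM."""
    return query_llm("Answer concisely for a blind user: " + question,
                     image_png=image_png)
```

Under these assumptions, `plan_steps("find an empty seat in this room")` might return something like `[{"action": "takeoff"}, {"action": "rotate", "degrees": 90}, {"action": "capture_image"}]`, which a drone controller would then execute one step at a time, feeding captured frames back through `describe_image` for spoken feedback.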