Generative AI (GenAI) tools are increasingly used for spreadsheet tasks, yet little is known about how blind users verify their outputs in accuracy-critical contexts. We conducted a study with 12 blind spreadsheet users to explore verification practices across tasks such as information extraction, formula generation, trend analysis, chart creation, and formatting. Participants never fully trusted outputs without verification and employed diverse strategies, including manual checks with screen readers and spreadsheet features, verification using the same AI tool, cross-validation with other AI tools, leveraging prior knowledge, and seeking human assistance. These approaches were adapted based on task context, perceived risk, and users' expertise. Errors were common, particularly in chart generation and formatting; some were detected, while others were overlooked. While verification improved confidence, it was often effortful, time-consuming, or infeasible for visual tasks. We discuss how blind users utilize GenAI not only as a task performer but also as a verification aid and validator, highlighting design opportunities for more accessible and reliable spreadsheet use.
ACM CHI Conference on Human Factors in Computing Systems