Large language models (LLMs) are increasingly used to make sense of unstructured data, but their use in contexts like conversation analysis, where sensemaking is often human-driven, iterative, and subjective, remains underexplored. We introduce Forage, a retrieval-augmented generation (RAG) tool for making sense of conversation data through exploratory search. We report on user studies with 27 participants across four user groups, including NPR journalists and municipal staff, observing how Forage is used to explore and analyze conversation data. We find that Forage supports both confirming and generating insights, providing structure and novel perspective on search results compared to a non-LLM-enabled search tool. Motivated by the goal of surfacing multiple perspectives in Forage, we present Wild Forage, a design provocation that generates and presents multiple interpretations of the same data along a specified axis of potential interpretive difference, such as political orientation. Expert user studies show that exposure to other interpretations through Wild Forage can help users meaningfully reconsider their assumptions during sensemaking, but our findings also highlight the importance of future work to leverage this opportunity while avoiding potential harms, such as stereotyping.
ACM CHI Conference on Human Factors in Computing Systems