Forage: Understanding LLM-facilitated sensemaking of conversation data

Abstract

Large language models (LLMs) are increasingly used to make sense of unstructured data, but their use in contexts like conversation analysis, where sensemaking is often human-driven, iterative, and subjective, is underexplored. We introduce Forage, a retrieval-augmented generation (RAG) tool for making sense of conversation data through exploratory search. We report on user studies with 27 participants across four user groups—including NPR journalists and municipal staff—observing how Forage is used to explore and analyze conversation data. We find Forage supports insight confirmation and generation, providing structure and novel insight about search results compared to a non-LLM-enabled search tool. Driven by the goal of generating multiple perspectives in Forage, we present Wild Forage, a design provocation that generates and presents multiple interpretations of the same data along a specified axis of potential interpretive difference, like political orientation. Expert user studies show that exposure to other interpretations through Wild Forage can help users meaningfully consider their assumptions in sensemaking, but findings also highlight the importance of future work to leverage this opportunity while avoiding potential harms, like stereotyping.

Authors
Hope Schroeder
MIT, Cambridge, Massachusetts, United States
Doug Beeferman
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Maya E. Detwiller
MIT, Cambridge, Massachusetts, United States
Dimitra Dimitrakopoulou
Massachusetts Institute of Technology, Cambridge, Massachusetts, United States
Deb Roy
MIT, Cambridge, Massachusetts, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Sensemaking

P1 - Room 130
7 presentations
2026-04-14, 20:15–21:45