Supporting Sensemaking of Large Language Model Outputs at Scale

Abstract

Large language models (LLMs) are capable of generating multiple responses to a single prompt, yet little effort has been expended to help end-users or system designers make use of this capability. In this paper, we explore how to present many LLM responses at once. We design five features, which include both pre-existing and novel methods for computing similarities and differences across textual documents, as well as how to render their outputs. We report on a controlled user study (n=24) and eight case studies evaluating these features and how they support users in different tasks. We find that the features support a wide variety of sensemaking tasks and even make tasks tractable that our participants previously considered to be too difficult to attempt. Finally, we present design guidelines to inform future explorations of new LLM interfaces.

Award
Honorable Mention
Authors
Katy Ilonka Gero
Harvard University, Cambridge, Massachusetts, United States
Chelse Swoopes
Harvard University, Cambridge, Massachusetts, United States
Ziwei Gu
Harvard University, Cambridge, Massachusetts, United States
Jonathan K. Kummerfeld
The University of Sydney, Sydney, NSW, Australia
Elena L. Glassman
Harvard University, Allston, Massachusetts, United States
Paper URL

doi.org/10.1145/3613904.3642139

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Sensemaking with AI A

5 presentations
2024-05-16 01:00:00 – 2024-05-16 02:20:00