ChainForge: A Visual Toolkit for Prompt Engineering and LLM Hypothesis Testing

Abstract

Evaluating outputs of large language models (LLMs) is challenging, requiring making—and making sense of—many responses. Yet tools that go beyond basic prompting tend to require knowledge of programming APIs, focus on narrow domains, or are closed-source. We present ChainForge, an open-source visual toolkit for prompt engineering and on-demand hypothesis testing of text generation LLMs. ChainForge provides a graphical interface for comparison of responses across models and prompt variations. Our system was designed to support three tasks: model selection, prompt template design, and hypothesis testing (e.g., auditing). We released ChainForge early in its development and iterated on its design with academics and online users. Through in-lab and interview studies, we find that a range of people could use ChainForge to investigate hypotheses that matter to them, including in real-world settings. We identify three modes of prompt engineering and LLM hypothesis testing: opportunistic exploration, limited evaluation, and iterative refinement.
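As a minimal sketch of the core idea the abstract describes (and not ChainForge's actual API), the following Python snippet crosses a prompt template's variables with a set of models and collects every response for side-by-side comparison. The template, variable values, model names, and `query` function are hypothetical placeholders.

```python
# Sketch only: enumerate every (prompt variation x model) combination,
# the comparison grid that ChainForge presents graphically.
from itertools import product

template = "Translate '{word}' into {language}."              # hypothetical template
variables = {"word": ["cat", "dog"], "language": ["French", "Japanese"]}
models = ["model-a", "model-b"]                               # placeholder model names

def query(model: str, prompt: str) -> str:
    """Stand-in for a real LLM call; returns a dummy response."""
    return f"<{model} response to: {prompt}>"

results = []
for values in product(*variables.values()):
    # Fill the template with one assignment of variable values.
    prompt = template.format(**dict(zip(variables.keys(), values)))
    for model in models:
        results.append({"model": model, "prompt": prompt,
                        "response": query(model, prompt)})

for row in results:
    print(row["model"], "|", row["prompt"], "->", row["response"])
```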

Award
Honorable Mention
Authors
Ian Arawjo
Harvard University, Cambridge, Massachusetts, United States
Chelse Swoopes
Harvard University, Cambridge, Massachusetts, United States
Priyan Vaithilingam
Harvard University, Cambridge, Massachusetts, United States
Martin Wattenberg
Harvard University, Cambridge, Massachusetts, United States
Elena L. Glassman
Harvard University, Cambridge, Massachusetts, United States
Paper URL

https://doi.org/10.1145/3613904.3642016


Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Evaluating AI Technologies B

320 'Emalani Theater
5 presentations
2024-05-14, 18:00–19:20