RELIC: Investigating Large Language Model Responses using Self-Consistency

Abstract

Large Language Models (LLMs) are notorious for blending fact with fiction and generating non-factual content, known as hallucinations. To address this challenge, we propose an interactive system that helps users gain insight into the reliability of the generated text. Our approach is based on the idea that the self-consistency of multiple samples generated by the same LLM relates to its confidence in individual claims in the generated texts. Using this idea, we design RELIC, an interactive system that enables users to investigate and verify semantic-level variations in multiple long-form responses. This allows users to recognize potentially inaccurate information in the generated text and make necessary corrections. From a user study with ten participants, we demonstrate that our approach helps users better verify the reliability of the generated text. We further summarize the design implications and lessons learned from this research for future studies of reliable human-LLM interactions.
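As a rough illustration of the self-consistency idea described in the abstract, the following Python sketch samples several responses to the same prompt and scores a claim by the fraction of samples that agree with it. The helpers sample_responses and supports are hypothetical stand-ins: a real setup would call an LLM with temperature > 0 and use an entailment (NLI) model for the semantic-level comparison RELIC performs, rather than the toy lexical check used here.

def sample_responses(prompt: str, n: int = 5) -> list[str]:
    """Hypothetical stand-in for n stochastic LLM generations (temperature > 0)."""
    # Toy data: four samples agree on the birth year, one does not.
    return [
        "Ada Lovelace was born in 1815 in London.",
        "Born in 1815, Ada Lovelace worked with Charles Babbage.",
        "Ada Lovelace was born in 1815.",
        "Ada Lovelace, born in 1815, wrote the first published algorithm.",
        "Ada Lovelace was born in 1820.",  # the inconsistent sample
    ][:n]

def supports(response: str, claim: str) -> bool:
    """Crude lexical check; a real system would use an NLI/entailment model."""
    tokens = [t.strip(".,").lower() for t in claim.split()]
    return all(t in response.lower() for t in tokens)

def consistency_score(prompt: str, claim: str, n: int = 5) -> float:
    """Confidence proxy: fraction of sampled responses that support the claim."""
    samples = sample_responses(prompt, n)
    return sum(supports(s, claim) for s in samples) / len(samples)

if __name__ == "__main__":
    prompt = "When was Ada Lovelace born?"
    for claim in ("born in 1815", "born in 1820"):
        print(f"{claim!r}: consistency = {consistency_score(prompt, claim):.2f}")

With this toy data, "born in 1815" scores 0.80 while "born in 1820" scores 0.20, flagging the latter as a likely hallucination; the paper's system surfaces such variations interactively over long-form responses.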

Authors
Furui Cheng
ETH Zurich, Zurich, Switzerland
Vilém Zouhar
ETH Zurich, Zurich, Switzerland
Simran Arora
Stanford University, Stanford, California, United States
Mrinmaya Sachan
ETH Zurich, Zurich, Switzerland
Hendrik Strobelt
IBM Research AI, Cambridge, Massachusetts, United States
Mennatallah El-Assady
ETH Zurich, Zurich, Switzerland
Paper URL

https://doi.org/10.1145/3613904.3641904

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Large Language Models

Room: 316A
5 presentations
2024-05-15, 01:00 – 02:20