The HaLLMark Effect: Supporting Provenance and Transparent Use of Large Language Models in Writing with Interactive Visualization

Abstract

The use of Large Language Models (LLMs) for writing has sparked controversy among both readers and writers. On one hand, writers are concerned that LLMs will deprive them of agency and ownership, and readers are concerned about spending their time on text generated by soulless machines. On the other hand, AI assistance can improve writing as long as writers conform to publisher policies, and as long as readers can be assured that a text has been verified by a human. We argue that a system that captures the provenance of interaction with an LLM can help writers retain their agency, conform to policies, and communicate their use of AI to publishers and readers transparently. We therefore propose HaLLMark, a tool for visualizing the writer's interaction with the LLM. We evaluated HaLLMark with 13 creative writers and found that it helped them retain a sense of control and ownership of the text.

Authors
Md Naimul Hoque
University of Maryland, College Park, Maryland, United States
Tasfia Mashiat
George Mason University, Fairfax, Virginia, United States
Bhavya Ghai
Amazon, New York, New York, United States
Cecilia D. Shelton
University of Maryland, College Park, Maryland, United States
Fanny Chevalier
University of Toronto, Toronto, Ontario, Canada
Kari Kraus
University of Maryland, College Park, Maryland, United States
Niklas Elmqvist
Aarhus University, Aarhus, Denmark
Paper URL

https://doi.org/10.1145/3613904.3641895

Video

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Writing and AI A

Room: 311
4 presentations
2024-05-16 18:00–19:20