Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs

Abstract

Large language models (LLMs) exhibit dynamic capabilities and appear to comprehend complex and ambiguous natural language prompts. However, calibrating LLM interactions is challenging for interface designers and end-users alike. A central issue is our limited grasp of how human cognitive processes begin with a goal and form intentions for executing actions, a blindspot even in established interaction models such as Norman's gulfs of execution and evaluation. To address this gap, we theorize how end-users "envision" translating their goals into clear intentions and craft prompts to obtain the desired LLM response. We define a process of Envisioning by highlighting three misalignments of not knowing: (1) what the task should be, (2) how to instruct the LLM to do the task, and (3) what to expect from the LLM's output in meeting the goal. Finally, we make recommendations to narrow the gulf of envisioning in human-LLM interactions.

Authors
Hariharan Subramonyam
Stanford University, Stanford, California, United States
Christopher Pondoc
Stanford University, Stanford, California, United States
Colleen Seifert
University of Michigan, Ann Arbor, Michigan, United States
Maneesh Agrawala
Stanford University, Stanford, California, United States
Roy Pea
Stanford University, Stanford, California, United States
Paper URL

doi.org/10.1145/3613904.3642754

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: Working with Data B

Room 316B
5 presentations
2024-05-14 20:00:00 – 21:20:00