“As an AI language model, I cannot”: Investigating LLM Denials of User Requests

Abstract

Users ask large language models (LLMs) to help with their homework, for lifestyle advice, or for support in making challenging decisions. Yet LLMs are often unable to fulfil these requests, either as a result of technical limitations or policies restricting their responses. To investigate the effect of LLMs denying user requests, we evaluate participants' perceptions of different denial styles. We compare specific denial styles (baseline, factual, diverting, and opinionated) across two studies, focusing respectively on LLMs' technical limitations and their social policy restrictions. Our results indicate significant differences in users' perceptions across the denial styles. The baseline denial, which gave participants a brief denial without any explanation, was rated significantly higher on frustration and significantly lower on usefulness, appropriateness, and relevance. In contrast, participants generally appreciated the diverting denial style. We provide design recommendations for LLM denials that better meet people's expectations of denials.

Award
Honorable Mention
Authors
Joel Wester
Aalborg University, Aalborg, Denmark
Tim Schrills
Institute for Multimedia and Interactive Systems, University of Luebeck, Luebeck, Germany
Henning Pohl
Aalborg University, Aalborg, Denmark
Niels van Berkel
Aalborg University, Aalborg, Denmark
Paper URL

https://doi.org/10.1145/3613904.3642135

Conference: CHI 2024

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2024.acm.org/)

Session: User Studies on Large Language Models

5 presentations
2024-05-13 20:00–21:20