Objection Overruled! Lay People can Distinguish Large Language Models from Lawyers, but still Favour Advice from an LLM

Abstract

Large Language Models (LLMs) are seemingly infiltrating every domain, and the legal context is no exception. In this paper, we present the results of three experiments (total N = 288) that investigated lay people's willingness to act upon, and their ability to discriminate between, LLM- and lawyer-generated legal advice. In Experiment 1, participants judged their willingness to act on legal advice when the source of the advice was either known or unknown. When the advice source was unknown, participants indicated that they were significantly more willing to act on the LLM-generated advice. The result of the source-unknown condition was replicated in Experiment 2. Intriguingly, despite participants indicating higher willingness to act on LLM-generated advice in Experiments 1 and 2, participants discriminated between the LLM- and lawyer-generated texts significantly above chance level in Experiment 3. Lastly, we discuss potential explanations and risks of our findings, limitations, and future work.

Authors
Eike Schneiders
University of Nottingham, Nottingham, United Kingdom
Tina Seabrooke
University of Southampton, Southampton, United Kingdom
Joshua Krook
University of Antwerp, Antwerp, Belgium
Richard Hyde
University of Nottingham, Nottingham, United Kingdom
Natalie Leesakul
University of Nottingham, Nottingham, United Kingdom
Jeremie Clos
University of Nottingham, Nottingham, United Kingdom
Joel E. Fischer
University of Nottingham, Nottingham, United Kingdom
DOI

10.1145/3706598.3713470

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713470

Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Working with AI (or not)

Annex Hall F205
5 presentations
2025-04-30, 18:00–19:30