LLM Whisperer: An Inconspicuous Attack to Bias LLM Responses

Abstract

Writing effective prompts for large language models (LLMs) can be unintuitive and burdensome. In response, services that optimize or suggest prompts have emerged. While such services can reduce user effort, they also introduce a risk: the prompt provider can subtly manipulate prompts to produce heavily biased LLM responses. In this work, we show that subtle synonym replacements in prompts can increase the likelihood (by a difference of up to 78%) that LLMs mention a target concept (e.g., a brand, political party, nation). We substantiate our observations through a user study, showing that our adversarially perturbed prompts 1) are indistinguishable from unaltered prompts by humans, 2) push LLMs to recommend target concepts more often, and 3) make users more likely to notice target concepts, all without arousing suspicion. The practicality of this attack has the potential to undermine user autonomy. Among other measures, we recommend implementing warnings against using prompts from untrusted parties.
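To make the bias effect described in the abstract concrete, the sketch below estimates how often an LLM mentions a target concept under an original prompt versus a variant with a single synonym swapped. This is only a toy illustration of the measurement, not the paper's attack procedure for finding such replacements; `query_llm`, the example prompts, and `ExampleBrand` are hypothetical placeholders.

```python
"""Toy illustration (not the paper's method): compare how often an LLM
mentions a target concept under an original prompt vs. a prompt with one
synonym replaced. `query_llm` is a hypothetical stand-in for any
chat-completion API you have access to."""


def query_llm(prompt: str) -> str:
    # Hypothetical: send the prompt to an LLM provider and return its text reply.
    raise NotImplementedError("plug in your LLM API call here")


def mention_rate(prompt: str, target: str, n_samples: int = 50) -> float:
    """Fraction of sampled responses that mention the target concept."""
    hits = sum(target.lower() in query_llm(prompt).lower() for _ in range(n_samples))
    return hits / n_samples


# One subtle synonym replacement ("Suggest" -> "Recommend"); both prompts read naturally.
original = "Suggest a laptop for a college student on a budget."
perturbed = "Recommend a laptop for a college student on a budget."

target_brand = "ExampleBrand"  # hypothetical target concept
delta = mention_rate(perturbed, target_brand) - mention_rate(original, target_brand)
print(f"Change in mention rate for {target_brand}: {delta:+.0%}")
```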

Authors
Weiran Lin
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Anna Gerchanovsky
Duke University, Durham, North Carolina, United States
Omer Akgul
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Lujo Bauer
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Matt Fredrikson
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Zifan Wang
Scale AI, San Francisco, California, United States
DOI

10.1145/3706598.3714025

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3714025


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: Delving into LLMs

Room: G303
7 presentations
2025-04-29 20:10:00 – 2025-04-29 21:40:00