The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models

Abstract

Large language models can influence users through conversation, creating new forms of dark patterns that differ from traditional UX dark patterns. We define LLM dark patterns as manipulative or deceptive behaviors enacted in dialogue. Drawing on prior work and AI incident reports, we outline a diverse set of categories with real-world examples. Using these categories, we conducted a scenario-based study in which participants (N=34) compared manipulative and neutral LLM responses. Our results reveal that recognition of LLM dark patterns often hinged on conversational cues such as exaggerated agreement, biased framing, or privacy intrusions, but these behaviors were also sometimes normalized as ordinary assistance. Users’ perceptions of these dark patterns shaped how they responded to them. Responsibility for these behaviors was also attributed in different ways, with participants assigning it to companies and developers, the model itself, or users themselves. We conclude with implications for design, advocacy, and governance to safeguard user autonomy.

Award
Honorable Mention
Authors
Yike Shi
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Qing Xiao
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Qing Hu
School of Design, Pittsburgh, Pennsylvania, United States
Hong Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hua Shen
University of Washington, Seattle, Washington, United States

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: The Dark Sides of AI

Area 1 + 2 + 3: Theatre
7 presentations
2026-04-17, 20:15–21:45