The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions

Abstract

Large Language Model-powered conversational agents (CAs) are increasingly capable of projecting sophisticated personalities through language, but how these projections affect users remains unclear. We therefore examine how linguistically expressed CA personalities affect user decisions and perceptions in the context of charitable giving. In a crowdsourced study, 360 participants interacted with one of eight CAs, each projecting a personality composed of three linguistic aspects: attitude (optimistic/pessimistic), authority (authoritative/submissive), and reasoning (emotional/rational). While the CA's composite personality did not affect participants' decisions, it did affect their perceptions and emotional responses. In particular, participants interacting with pessimistic CAs reported a lower emotional state and lower affinity toward the cause, and perceived the CA as less trustworthy and less competent, yet tended to donate more to the charity. Perceptions of trust, competence, and situational empathy significantly predicted donation decisions. Our findings highlight the risks CAs pose as instruments of manipulation, subtly influencing user perceptions and decisions.

Award
Honorable Mention
Authors
Hüseyin Uğur Genç
TU Delft, Delft, Netherlands
Heng Gu
TU Delft, Delft, Netherlands
Chadha Degachi
TU Delft, Delft, Netherlands
Evangelos Niforatos
TU Delft, Delft, Netherlands
Senthil Chandrasegaran
TU Delft, Delft, Netherlands
Himanshu Verma
TU Delft, Delft, Netherlands

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Human Behavior with AI Systems

M2 - Room M211/212
7 presentations
2026-04-14, 20:15–21:45