The Dark Sides of AI

Conference Name
CHI 2026
The Dominance Effect: How Verbal and Nonverbal Cues of Virtual Agents Influence Decision-Making in VR
Abstract

Embodied Conversational Agents (ECAs) can influence users through verbal and nonverbal social cues. Focusing on dominance, we examine how verbal and nonverbal dominance cues influence users' decision-making and perceptions of the agent in VR. We conducted a user study using a 2 (verbal: dominant vs. submissive) x 2 (nonverbal: dominant vs. submissive) full factorial design, operationalized through a route-selection task at a virtual crossroads. Results indicated that verbal dominance cues shaped participants' dominance perception but did not influence decision-making, while nonverbal dominance cues affected route-selection behavior without altering perceived dominance. Both verbal and nonverbal cues also affected broader social evaluations of the agent (e.g., intelligence, competence, warmth, and trustworthiness), with nonverbal cues uniquely affecting likability and social presence. These findings highlight the complementary roles of verbal and nonverbal dominance cues in human-agent interaction in VR and inform the design of context-sensitive, dominance-calibrated ECAs for training, education, and decision support.

Authors
Taeyeon Kim
Pusan National University, Busan, Korea, Republic of
Hyeongil Nam
University of Calgary, Calgary, Alberta, Canada
Sunghun Jung
Pusan National University, Busan, Korea, Republic of
Ahmad A. Fouad
University of Calgary, Calgary, Alberta, Canada
Kangsoo Kim
University of Calgary, Calgary, Alberta, Canada
Myungho Lee
Pusan National University, Busan, Korea, Republic of
Through a Live Elections Dashboard, Darkly: Managing Expectations and Trust in Progressive Vote Counting During the 2024 U.S. Election
Abstract

During U.S. elections, news outlets publish live dashboards to contextualize vote counting and manage public expectations. This proved challenging in 2020 amid election fraud allegations, sparking conversations about how data journalists might better visualize and explain live vote counting. To address this, we designed a dashboard to foster understanding of the progressive nature of vote counts and more realistic expectations of the vote counting timeline. We deployed it during the 2024 U.S. presidential election, showing it to 308 people with real results, and collected surveys and interviews on impressions and trust. We contribute: (1) a design process and framework for how audiences might form expectations around live data, (2) survey findings suggesting live forecasts slightly increased confidence in vote counting and slightly reduced belief in evidence of fraud, and (3) interview findings underscoring the importance of agency in viewing live data and tensions in the perceived usefulness of live forecasts. Our supplementary materials are available at https://osf.io/qxk2t/.

Authors
Mandi Cai
Northwestern University, Evanston, Illinois, United States
Grace Wang
Northwestern University, Evanston, Illinois, United States
Chloe Rose Mortenson
Northwestern University, Evanston, Illinois, United States
Fumeng Yang
University of Maryland College Park, College Park, Maryland, United States
Erik Nisbet
Northwestern University, Evanston, Illinois, United States
Matthew Kay
Northwestern University, Evanston, Illinois, United States
"Can LLMs Persuade Humans with Deception?": From a Deceptive Strategy Taxonomy to a Large-Scale Empirical Study
Abstract

Beyond hallucinations, Large Language Models (LLMs) can craft deceptive arguments that erode users' critical thinking, posing a significant yet underexamined societal risk. To address this gap, we develop a taxonomy of eight deceptive persuasion strategies by integrating top-down rhetorical theory with a bottom-up analysis of 3,360 AI-generated messages from four LLM families, and examine their effects on user perceptions. Through a large-scale user study (N=602) complemented by a think-aloud protocol, we found that participants were vulnerable to Information Manipulation and Uncertainty Exploitation, especially when a message contradicted their prior beliefs. Vulnerability was significantly higher for participants with low cognitive reflection, low topic knowledge, and low topic involvement. Qualitative analyses further revealed that participants were persuaded by the plausibility of an overall narrative even when they distrusted specific details, interpreting deceptive outputs as logically framed information that broadens their perspective. We discuss critical implications of these findings for the design of trustworthy AI systems, adaptive user interfaces, and targeted literacy education.

Authors
Haein Yeo
Hanyang University, Seoul, Korea, Republic of
Seungwan Jin
Hanyang University, Seoul, Korea, Republic of
Taehyung Noh
Hanyang University, Seoul, Korea, Republic of
Yejin Shin
Telecommunications Technology Association, Seoul, Korea, Republic of
Sangyeon Kang
Telecommunications Technology Association, Seoul, Korea, Republic of
Sangwoo Heo
Naver, Seoul, Korea, Republic of
Jiwon Chung
Naver, Seoul, Korea, Republic of
Hwarim Hyun
NAVER, Seoul, Korea, Republic of
Kyungsik Han
Hanyang University, Seoul, Korea, Republic of
Video
The Siren Song of LLMs: How Users Perceive and Respond to Dark Patterns in Large Language Models
Abstract

Large language models can influence users through conversation, creating new forms of dark patterns that differ from traditional UX dark patterns. We define LLM dark patterns as manipulative or deceptive behaviors enacted in dialogue. Drawing on prior work and AI incident reports, we outline a diverse set of categories with real-world examples. Using them, we conducted a scenario-based study in which participants (N=34) compared manipulative and neutral LLM responses. Our results reveal that recognition of LLM dark patterns often hinged on conversational cues such as exaggerated agreement, biased framing, or privacy intrusions, but these behaviors were also sometimes normalized as ordinary assistance. Users' perceptions of these dark patterns shaped how they responded to them. Responsibility for these behaviors was also attributed in different ways, with participants assigning it to companies and developers, the model itself, or users. We conclude with implications for design, advocacy, and governance to safeguard user autonomy.

Award
Honorable Mention
Authors
Yike Shi
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Qing Xiao
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Qing Hu
School of Design, Pittsburgh, Pennsylvania, United States
Hong Shen
Carnegie Mellon University, Pittsburgh, Pennsylvania, United States
Hua Shen
University of Washington, Seattle, Washington, United States
Investigating AI-induced Technostress and Coping Strategies of Professionals
Abstract

While the rise of AI has benefited professionals, it also induces technostress that threatens their expertise and jobs. To ensure the human-centered advancement of technology, a deep understanding of users' technostress and how they cope with it is essential. Although technostress has long been discussed, the growing integration of AI tools into professionals' everyday work amplifies these challenges and calls for further exploration, making this a timely moment to examine professionals' real-world experiences and voices. Accordingly, our study investigates AI-induced technostress experienced by professionals and the coping strategies they employ. Through focus group interviews with 19 professionals from diverse fields, we identified seven AI-induced technostressors and examined their coping strategies along two dimensions: coping style (problem-focused vs. emotion-focused) and value orientation (AI-oriented vs. humanness-oriented). Drawing on professionals' coping strategies, we suggest practical implications to support users in coping with AI-induced technostress.

Authors
Heesung Kwon
KAIST, Daejeon, Korea, Republic of
Jeesun Oh
KAIST, Daejeon, Korea, Republic of
Suyoun Lee
KAIST, Daejeon, Korea, Republic of
Sunok Lee
Sogang University, Seoul, Korea, Republic of
Sangsu Lee
KAIST, Daejeon, Korea, Republic of
The AI Genie Phenomenon and Three Types of AI Chatbot Addiction: Escapist Roleplays, Pseudosocial Companions, and Epistemic Rabbit Holes
Abstract

Recent reports on generative AI chatbot use raise concerns about its addictive potential. An in-depth understanding is imperative to minimize risks, yet AI chatbot addiction remains poorly understood. This study examines how to characterize AI chatbot addiction: why users become addicted, the symptoms commonly reported, and the distinct types it comprises. We conducted a thematic analysis of Reddit entries (n=334) across 14 subreddits where users narrated their experiences with addictive AI chatbot use, followed by an exploratory data analysis. We found: (1) users' dependence was tied to the "AI Genie" phenomenon (users can get exactly what they want with minimal effort) and marked by symptoms that align with the addiction literature, (2) three distinct addiction types: Escapist Roleplay, Pseudosocial Companion, and Epistemic Rabbit Hole, (3) sexual content was involved in multiple cases, and (4) the perceived helpfulness of recovery strategies differed between addiction types. Our work lays empirical groundwork to inform future strategies for prevention, diagnosis, and intervention.

Authors
M. Karen Shen
University of British Columbia, Vancouver, British Columbia, Canada
Jessica Huang
University of British Columbia, Vancouver, British Columbia, Canada
Olivia Liang
University of British Columbia, Vancouver, British Columbia, Canada
Ig-Jae Kim
Korea Institute of Science and Technology, Seoul, Korea, Republic of
Dongwook Yoon
University of British Columbia, Vancouver, British Columbia, Canada
Video
How Tech Workers Contend with Hazards of Humanlikeness in Generative AI
Abstract

Generative AI's humanlike qualities are driving its rapid adoption in professional domains. However, this anthropomorphic appeal raises concerns from HCI and responsible AI scholars about potential hazards and harms, such as overtrust in system outputs. To investigate how technology workers navigate these humanlike qualities and anticipate emergent harms, we conducted focus groups with 30 professionals across six job functions (ML engineering, product policy, UX research and design, product management, technology writing, and communications). Our findings reveal an unsettled knowledge environment surrounding humanlike generative AI, where workers' varying perspectives illuminate a range of potential risks for individuals, knowledge work fields, and society. We argue that workers require comprehensive support, including clearer conceptions of "humanlikeness," to effectively mitigate these risks. To aid in mitigation strategies, we provide a conceptual map articulating the identified hazards and their connection to conflated notions of "humanlikeness."

Authors
Mark Diaz
Google Research, New York City, New York, United States
Renee Shelby
Google Research, San Francisco, California, United States
Eric Corbett
Google Research, New York, New York, United States
Andrew Smart
Google, San Francisco, California, United States