Embodied Conversational Agents (ECAs) can influence users through verbal and nonverbal social cues. Focusing on dominance, we examine how verbal and nonverbal dominance cues shape users' decision-making and their perceptions of the agent in VR.
We conducted a user study using a 2 (verbal: dominant vs. submissive) x 2 (nonverbal: dominant vs. submissive) full factorial design, operationalized through a route-selection task at a virtual crossroads. Results indicated that verbal dominance cues shaped participants' dominance perception but did not influence decision-making, while nonverbal dominance cues affected route-selection behavior without altering perceived dominance. Both verbal and nonverbal cues also affected broader social evaluations of the agent (e.g., intelligence, competence, warmth, and trustworthiness), with nonverbal cues uniquely affecting likability and social presence. These findings highlight the complementary roles of verbal and nonverbal dominance cues in human--agent interaction in VR and inform the design of context-sensitive, dominance-calibrated ECAs for training, education, and decision support.
During U.S. elections, news outlets publish live dashboards to contextualize vote counting and manage public expectations. This proved challenging in 2020 amid election fraud allegations, sparking conversations about how data journalists might better visualize and explain live vote counting. To address this, we designed a dashboard to foster understanding of the progressive nature of vote counts and more realistic expectations of the vote counting timeline. We deployed it with real results during the 2024 U.S. presidential election, showing it to 308 people, and collected surveys and interviews on their impressions and trust. We contribute: (1) a design process and framework for how audiences might form expectations around live data, (2) survey findings suggesting live forecasts slightly increased confidence in vote counting and slightly reduced belief in evidence of fraud, and (3) interview findings underscoring the importance of agency in viewing live data and tensions in the perceived usefulness of live forecasts. Our supplementary materials are available at https://osf.io/qxk2t/.
Beyond hallucinations, Large Language Models (LLMs) can craft deceptive arguments that erode users' critical thinking, posing a significant yet underexamined societal risk. To address this gap, we develop a taxonomy of eight deceptive persuasion strategies by integrating top-down rhetorical theory with a bottom-up analysis of 3,360 AI-generated messages from four LLM families, and we examine their effects on user perceptions. Through a large-scale user study (N=602) complemented by a think-aloud protocol, we found that participants were vulnerable to \textit{Information Manipulation} and \textit{Uncertainty Exploitation}, especially when a message contradicted their prior beliefs. Vulnerability was significantly higher for participants with low cognitive reflection, low topic knowledge, and low topic involvement. Qualitative analyses further revealed that participants were persuaded by the plausibility of an overall narrative even when they distrusted specific details, interpreting deceptive outputs as logically framed information that broadened their perspective. We discuss critical implications of these findings for the design of trustworthy AI systems, adaptive user interfaces, and targeted literacy education.
Large language models can influence users through conversation, creating new forms of dark patterns that differ from traditional UX dark patterns. We define LLM dark patterns as manipulative or deceptive behaviors enacted in dialogue. Drawing on prior work and AI incident reports, we outline a diverse set of categories with real-world examples. Using these categories, we conducted a scenario-based study in which participants (N=34) compared manipulative and neutral LLM responses. Our results reveal that recognition of LLM dark patterns often hinged on conversational cues such as exaggerated agreement, biased framing, or privacy intrusions, but these behaviors were also sometimes normalized as ordinary assistance. Users' perceptions of these dark patterns shaped how they responded to them. Responsibility for these behaviors was also attributed in different ways, with participants assigning it to companies and developers, the model itself, or users. We conclude with implications for design, advocacy, and governance to safeguard user autonomy.
While the rise of AI has benefited professionals, it also induces technostress that threatens their expertise and jobs. To ensure the human-centered advancement of technology, a deep understanding of users' technostress and how they cope with it is essential. Although technostress has long been discussed, the growing integration of AI tools into professionals' everyday work amplifies these challenges and calls for further exploration, making this a timely moment to examine professionals' real-world experiences and voices. Our study therefore investigates the AI-induced technostress experienced by professionals and the coping strategies they employ. Through focus group interviews with 19 professionals from diverse fields, we identified seven AI-induced technostressors and examined coping strategies along two dimensions: coping style (problem-focused vs. emotion-focused) and value orientation (AI-oriented vs. humanness-oriented). Drawing on professionals' coping strategies, we suggest practical implications to support users in coping with AI-induced technostress.
Recent reports on generative AI chatbot use raise concerns about its addictive potential. An in-depth understanding is imperative to minimize risks, yet AI chatbot addiction remains poorly understood. This study examines how to characterize AI chatbot addiction---why users become addicted, the symptoms commonly reported, and the distinct types it comprises. We conducted a thematic analysis of Reddit entries (n=334) across 14 subreddits where users narrated their experiences with addictive AI chatbot use, followed by an exploratory data analysis. We found: (1) users' dependence was tied to the "AI Genie" phenomenon---users can get exactly what they want with minimal effort---and was marked by symptoms that align with the addiction literature, (2) three distinct addiction types: Escapist Roleplay, Pseudosocial Companion, and Epistemic Rabbit Hole, (3) sexual content was involved in multiple cases, and (4) the perceived helpfulness of recovery strategies differed across addiction types. Our work lays empirical groundwork to inform future strategies for prevention, diagnosis, and intervention.
Generative AI's humanlike qualities are driving its rapid adoption in professional domains. However, this anthropomorphic appeal raises concerns from HCI and responsible AI scholars about potential hazards and harms, such as overtrust in system outputs. To investigate how technology workers navigate these humanlike qualities and anticipate emergent harms, we conducted focus groups with 30 professionals across six job functions (ML engineering, product policy, UX research and design, product management, technology writing, and communications). Our findings reveal an unsettled knowledge environment surrounding humanlike generative AI, where workers' varying perspectives illuminate a range of potential risks for individuals, knowledge work fields, and society. We argue that workers require comprehensive support, including clearer conceptions of ``humanlikeness'' to effectively mitigate these risks. To aid in mitigation strategies, we provide a conceptual map articulating the identified hazards and their connection to conflated notions of ``humanlikeness.''