We present an experimental study that investigates how LLM-driven conversational AI tools might be weaponized to facilitate, exacerbate, or commoditize coercive control. Inspired by speculative design, we construct four scenarios that combine well-known coercive control tactics with the current capabilities of conversational AI tools. We then explore these scenarios via interactions with popular AI agents (ChatGPT, Gemini). We find that although AI tools refuse straightforward requests for harmful content, their guardrails can be circumvented via strategies such as gradual persuasion, splitting conversations, pre-prompting, and manipulating the AI agent’s settings. Collectively, these strategies enable AI agents to be leveraged in ways that facilitate harassment, intimidation, gaslighting, monitoring, surveillance, and other coercive control tactics. To make these tools safer for everyone, we discuss opportunities for AI agents to resist being abused for coercive control, such as analyzing users’ conversational patterns and ensuring that pre-programmed settings are clearly visible to prevent covert manipulation.
Managing one’s digital footprint is overwhelming, as it spans multiple platforms and involves countless context-dependent decisions. Recent advances in agentic AI offer ways forward by enabling holistic, contextual privacy-enhancing solutions. Building on this potential, we adopted a “human-as-the-unit” perspective and investigated users’ cross-context privacy challenges through 12 semi-structured interviews. Results reveal that people rely on ad hoc manual strategies while lacking comprehensive privacy controls, highlighting nine privacy-management challenges across applications, temporal contexts, and relationships. To explore solutions, we generated nine AI agent concepts and evaluated them via a speed-dating survey with 116 US participants. The three highest-ranked concepts were all post-sharing management tools with partial or full agent autonomy, with users expressing greater trust in AI accuracy than in their own efforts. Our findings highlight a promising design space in which users see AI agents bridging the fragmentation of privacy management, particularly through automated, comprehensive post-sharing remediation of users’ digital footprints.
Configuring security and privacy (S&P) settings can be challenging for non-expert users, resulting in excessive reliance on persuasive cues such as social proof or expert suggestions. Although such suggestions can promote protective user choices, they can also be misused as deceptive patterns that steer users toward less-protective settings. This study examines (1) how source-based suggestions (public vs. expert), when combined with logical persuasive statements, influence decision-making in S&P settings under honest or deceptive conditions and (2) how users evaluate these approaches once deception is revealed. An online experiment with 1,433 U.S. participants, utilizing a 2×2×2 factorial design, revealed that persuasive statements amplified the effect of social proof- and authority-based cues, an effect that persisted even when the cues promoted less-protective settings. These findings underscore the importance of persuasive S&P interfaces that follow transparent and rational design, as well as complementary interventions that foster users’ critical assessment of, and resilience against, manipulation.
Dark patterns, deceptive interface designs that manipulate user behavior, have been extensively studied for their effects on human decision-making and autonomy. Yet, with the rising prominence of LLM-powered GUI agents that automate tasks from high-level intents, understanding how dark patterns affect agents is increasingly important. We present a two-phase empirical study examining how agents, human participants, and human-AI teams respond to 16 types of dark patterns across diverse scenarios. Phase 1 showed that agents often fail to recognize dark patterns and, even when aware, prioritize task completion over protective action. Phase 2 revealed divergent failure modes: humans succumb to cognitive shortcuts and habitual compliance, while agents falter from procedural blind spots. Human oversight improved avoidance but introduced costs such as attentional tunneling and cognitive load. Our findings show that neither humans nor agents are uniformly resilient and that collaboration introduces new vulnerabilities, suggesting design needs for transparency, adjustable autonomy, and oversight.
Deceptive patterns, i.e., dark patterns and manipulative user interfaces (UIs), are widely used design methods that aim to manipulate users into acting against their own interests. These patterns may particularly affect people with less education, people with visual impairments, and older adults. Yet, access is a critical feature of the user experience (UX), development standards, and law. We considered whether and how the Web Content Accessibility Guidelines (WCAG) and related legislation, such as the European Accessibility Act (EAA), can act as tools against deceptive patterns. We used these guidelines and legal statutes in a heuristic evaluation to analyze whether and how deceptive patterns violate or conform to these standards. Although statistical analysis revealed no significant relationship, we identified three patterns implicated by the WCAG guidelines: Countdown Timer, Auto-Play, and Hidden Information. We offer this approach as one tool in the fight against UI-based deception and in support of inclusive design.
Deceptive/Manipulative Patterns (DMPs) are interface designs, also known as "dark patterns," that manipulate user behavior. While considerable attention has been paid to their ethical and legal implications, empirical evidence about their real-world effects remains diffuse. This review synthesizes up-to-date experimental studies, focusing on works that quantify how (or whether) DMPs influence users. We also aggregate findings on interventions aimed at reducing DMP effects. Our synthesis highlights the experimental agreement that DMPs do significantly alter user behavior (with large variance in effect size) and that external interventions have been mostly unsuccessful in mitigating their effects. Lastly, we show that significant correlations between DMP effects and personal characteristics (e.g., age or political affiliation) are uncommon, indicating that DMPs similarly affect nearly all populations tested. By summarizing the experimental evidence, we clarify the effects of DMPs, highlight gaps and tensions in the existing experimental literature, and help inform ongoing research and policy directions.
Nudges are subtle interventions designed to influence user behavior without restricting choice. Responsible gambling messages (RGMs) exemplify such nudges by encouraging safer decision-making in gambling environments. Prior research has examined how pop-up messages influence gambling behavior in experimental settings and has explored the design of effective slogan messages. However, little is known about how different types of RGMs shape users’ real-world gambling behavior and safety. To address this gap, we apply a nudging perspective to examine how RGMs support gambling safety throughout gamblers’ decision-making journey. We conducted semi-structured interviews with 22 gamblers and found that participants were generally aware of RGMs, yet some misunderstood their intended purpose. Participants perceived RGMs’ safety impact along both attitudinal and behavioral dimensions. We further discuss users’ message-reception practices and the effectiveness of RGMs as nudges, and conclude with design implications for promoting gambling safety.