Large language models (LLMs) are increasingly integrated into daily life through conversational interfaces, processing user data via natural language inputs and exhibiting advanced reasoning capabilities, raising new concerns about user control over privacy. While much research has focused on potential privacy risks, less attention has been paid to the data control mechanisms these platforms provide. This study examines six conversational LLM platforms, analyzing how they define and implement features for users to access, edit, delete, and share data. Our analysis reveals an emerging paradigm of data control in conversational LLM platforms, in which user data is generated and derived through interaction itself, natural language enables flexible yet often ambiguous control, and multi-user interactions with shared data raise questions of co-ownership and governance. Based on these findings, we offer practical insights for platform developers, policymakers, and researchers to design more effective and usable privacy controls in LLM-powered conversational interactions.
End-to-end verifiable Internet voting promises that voters can remotely check whether their ballot was recorded correctly and that all ballots were tallied as cast. To achieve an adequate level of security, however, voters actually need to perform the first check. Our research focuses on the cast-then-audit approach for this check. Drawing on related work, we improve this approach, in particular by providing a step-by-step guide. We conducted a deceptive online user study (N=437) to compare our improved system with a baseline version from an actual election. We also measured usability and participants' confidence in using such systems. Our findings show that participants using the improved system performed significantly better than those using the baseline with respect to detecting and reporting manipulations. Furthermore, we show that distinguishing between detection and reporting is important for understanding how to further increase overall security.
Cybersecurity anxiety captures the persistent worry, stress, and perceived lack of control individuals experience when navigating digital threats. While prior research has examined privacy concerns, computer anxiety, and related constructs, no validated instrument exists to specifically measure anxiety in cybersecurity contexts. We address this gap with the Cybersecurity Anxiety Scale (CybAS), a 15-item psychometric instrument developed through literature review, item generation, and multiple survey studies. CybAS consists of three factors: Present (current concerns), Future (anticipated threats), and Control (perceived control over outcomes). Our analyses confirm strong reliability and validity, and the concise format makes CybAS suitable for both research and applied settings. Beyond measurement, CybAS offers HCI researchers a diagnostic framework for detecting misalignments between users’ mental models and security technologies, enabling the design of anxiety-aware security systems that directly address emotional barriers and bridge the gap between usability, trust, and security.
Modern user interfaces are complex composites, with elements originating from various sources, such as the operating system, apps, a web browser, or websites. We posit that security and privacy decisions can to some extent depend on users correctly identifying an element's source, a concept we term "surface attribution." Through two large-scale vignette-based surveys (N=4,400 and N=3,057), we present the first empirical measurement of this ability. We find that users struggle, correctly attributing a UI element's source only 55% of the time on desktop and 53% on mobile. Familiarity and strong brand cues are associated with improved accuracy, whereas UI positioning, a long-standing security design principle, especially for browsers, has minimal impact. Furthermore, simply adding a "Security & Privacy" brand cue to Android permission prompts failed to improve attribution. These findings reveal a fundamental gap in users' mental models, indicating that relying on users to distinguish trusted UI elements is a fragile security paradigm.
Non-custodial wallets (NCWs) grant users full control over their keys and crypto assets, whereas custodial wallets (CWs) rely on centralized exchanges. Security breaches at major exchanges are on the rise, exemplified by the 2022 FTX fraud, yet their influence on users' security perceptions and risk-mitigation behaviors remains understudied. To address this gap, we conducted 22 semi-structured interviews and a follow-up survey with 430 participants, both focused on the FTX incident. We find that learning about FTX reduced trust in CWs and increased the perceived security of NCWs. However, most users of non-SEC-compliant (and thus equally risky) CWs did not transfer crypto to mitigate potential threats, showing continued trust in their current wallets. Those who did often moved all funds from CWs to traditional banks rather than adopting NCWs. Notably, only one-third of survey participants were aware that centralized exchanges hold their private keys, and many still used non-compliant exchanges.
Mobile phone numbers function as single keys to banking, government, and commerce, making the Subscriber Identity Module (SIM) a critical element of security. In April 2025, South Korea’s largest carrier experienced a SIM breach that compromised authentication keys and exposed nearly 27 million subscriber identifiers. We conducted semi-structured interviews with mental-model elicitation (N=33) to examine users’ awareness of and responses to the breach, as well as their understanding of SIM-based authentication. Results reveal a pronounced awareness–action gap: participants recognized the breach yet held incomplete mental models, perceived little personal risk, and rarely acted protectively, even when affected. Learned helplessness, reliance on carriers, and the invisibility of the SIM shaped these passive responses. Brief educational interventions improved conceptual understanding but seldom produced lasting behavioral change. Our findings demonstrate how technical opacity and psychological factors jointly inhibit protective action, and we offer design implications for usable security, emphasizing interventions that realign users’ mental models with system risks to foster sustainable security practices.
While regulatory frameworks call for the implementation of AI certifications, empirical knowledge about how such certifications affect user interactions is still scarce. In this work, we examined how AI certifications affect users' trust and reliance. In addition, we examined whether certifications elevate user expectations and whether unmet expectations subsequently reduce trust. In a 2 (certification vs. no certification) × 2 (reliability: high vs. low) between-subjects online study, N = 644 participants had to identify bacterial infestations in pictures with the help of an AI. Our results show that, before interacting with the AI, participants trusted the certified system more and showed reduced vigilance. However, these effects disappeared post-interaction, where system reliability, rather than the certification, significantly affected trust and vigilance. Notably, certifications did not raise expectations per se but instead amplified the impact of system reliability on user trust. Additional exploratory results showed that the certification supported appropriate reliance.