User authentication on head-mounted displays (HMDs) still relies on passwords, which are cumbersome to input and susceptible to shoulder-surfing attacks. Recent research has shown that behavioral signals collected during common HMD tasks are highly distinctive across users. Building on these findings, this paper presents a knowledge-driven behavioral authentication system for HMDs. Our system uses user-defined gestures as cues and trains a per-user anomaly detector on hand joint motion signals. To evaluate its effectiveness, we conducted a comprehensive multi-session user study (n=20) and an observation attack study (n=10). The results show that gestures secured with joint motions are resilient to worst-case video-based observation attacks (AUC = 0.97, EER = 3.58%) and maintain high recall performance over one week (AUC = 0.93, EER = 9.82%). These findings suggest that user-generated biometric hand gestures offer a promising approach to securing HMDs.
Artificial Intelligence (AI) is often framed as a transformative approach to improving accessibility, with major technology companies investing considerable resources in AI applications targeting disabled users. This investment in AI for accessibility has many benefits but remains largely unquestioned. Through a critical discourse analysis of 126 public-facing blog posts and news articles by leading U.S.-based AI companies, we reveal the ways in which technology companies render different modes of disabled participation, bestow agency upon AI as a competent and capable actor, reinforce their own role in shaping AI futures, and legitimize the development of AI for accessibility. By examining tech companies' AI visions alongside Critical Disability Studies scholarship, we discuss the concerns raised by framing AI as a means to “solve” disability-related challenges while sidestepping deeper structural questions about equity, agency, and responsibility.
Governance structures for new technologies are frequently top-down, reactive, and informally enforced, leaving marginalized communities with little power to address harms until after they occur. To address these limitations, we introduce Proactive Accountability, a conceptual framework theorizing that effective governance must be community-led, formally enforced, and continually maintained. We explore these principles through a speculative design study with Detroit's food sovereignty community, in which participants identified community-owned cooperatives, described as ``ancestral technologies'', as a model for redistributing power within a capitalist economy. Synthesizing these theoretical and empirical insights, we present the Designing for Proactive Accountability (D4PA) framework, with implications for how designers can operationalize the goals of proactive accountability in HCI research and design projects. Finally, we contribute a future research agenda that positions cooperatives not merely as beneficiaries of design, but as sites of inquiry for understanding how to institutionalize justice-oriented democratic governance of sociotechnical systems.
QR codes are widely used but can become a vector for phishing attacks (QRishing). To support users, we systematically developed a usable, secure QR code scanner, SEQR (Security-Enhanced QR code scanner). We based SEQR's design on two systematic reviews: (i) of the academic literature (2015–2025), identifying 96 papers on QRishing, and (ii) of the MITRE ATT&CK® Mobile repository, finding 36 QRishing techniques. From these two sources, we categorized 60 potential attacks and divided them into those that SEQR addresses purely at the technology level and those where SEQR involves users in the decision. We evaluated SEQR's effectiveness in thwarting attacks in a between-subjects online study (n=556), where SEQR achieved 93.35% correct answers, compared to 75.24% for the Apple iOS QR code scanner and 65.11% for the Samsung Android QR code scanner. We implemented SEQR as an open-source Android application, available on GitHub.
Everyday talk is often treated as casual chatter, yet it plays a crucial role in how people acquire and share knowledge. Cybersecurity practices are typically informed by formal training, which often overlooks the impact of social exchanges. This paper investigates how informal conversations act as a socio-technical mechanism shaping cybersecurity awareness and practices. We conducted an online survey (N=215) in which participants described recent discussions about cybersecurity, including who was involved, where they took place, and what triggered them. Quantitative and thematic analyses revealed common contexts, social settings, and topics: most conversations occurred spontaneously in private environments, with personal experiences being the most frequent trigger. We contribute empirical insights into informal security conversations to inform the design of human-centered technologies that surface and mediate security-related discussions in everyday contexts, fostering implicit and continuous security awareness.
Agender euphoria is a new term for the powerful feelings of happiness, joy, and contentment derived from experiences of gender-free embodiments, spaces, and activities. People with and without agender and adjacent identities (e.g., genderless, gender-free, non-binary, gender-apathetic) may have such experiences under the right circumstances. Video games can offer gender minorities a safe haven for gender-euphoric experiences; however, the possibility of agender euphoric experiences has remained unexplored. We examined this overlooked frame of self-actualization with 142 people who identified as having or desiring agender euphoric experiences. Using the critical incident technique (CIT), we uncovered how games and play experiences create (and inhibit) agender euphoria. We surface this experiential phenomenon and provide empirically grounded criteria for designing games that elicit agender euphoric experiences for everyone, but especially agender and agender-adjacent players. This work adds to the growing critical literatures on marginalized experiences in games research and human-computer interaction.
So-called relatedness technologies aim to create experiences of relatedness between people over distance. Typically, such technologies focus on implicit or expressive interaction, as opposed to the explicit, information-focused interaction of conventional communication technologies. Drawing on psychological theory, previous research has identified different design strategies for relatedness technologies, such as awareness, expressivity, or gift-giving. However, despite this solid theoretical understanding, designing for a fulfilling relatedness experience remains challenging and often conflicts with other psychological needs, such as autonomy or security. This research explores the specific potentials of, and barriers to, the use and acceptance of relatedness technologies. Based on a comparative evaluation of five relatedness concepts in an online study (N = 221) combining quantitative and qualitative data, we identified overarching patterns of promising design strategies for particular user groups and revealed overall need fulfillment as a central predictor of the intention to use the technology.