Notable Papers

Showing up to the top 30 papers in each category.

ACM CHI Conference on Human Factors in Computing Systems

2
VueBuds: Visual Intelligence with Wireless Earbuds
Maruchi Kim (University of Washington, Seattle, Washington, United States), Rasya Fawwaz (University of Washington, Seattle, Washington, United States), Zhi Yang Lim (University of Washington, Seattle, Washington, United States), Brinda Moudgalya (University of Washington, Seattle, Washington, United States), Hexi Wang (University of Washington, Seattle, Washington, United States), Yuanhao Zeng (University of Washington, Seattle, Washington, United States), Shyamnath Gollakota (University of Washington, Seattle, Washington, United States)
Despite their ubiquity, wireless earbuds remain audio-centric due to size and power constraints. We present VueBuds, the first camera-integrated wireless earbuds for egocentric vision, capable of operating within stringent power and form-factor limits. Each VueBud embeds a camera into a Sony WF-1000XM3 to stream visual data over Bluetooth to a host device for on-device vision language model (VLM) processing. We show analytically and empirically that while each camera's field of view is partially occluded by the face, the combined binocular perspective provides comprehensive forward coverage. By integrating VueBuds with VLMs, we build an end-to-end system for real-time scene understanding, translation, visual reasoning, and text reading, all from low-resolution monochrome cameras drawing under 5 mW through on-demand activation. Through online and in-person user studies with 90 participants, we compare VueBuds against smart glasses across 17 visual question-answering tasks, and show that our system achieves response quality on par with Ray-Ban Meta. Our work establishes low-power camera-equipped earbuds as a compelling platform for visual intelligence, bringing rapidly advancing VLM capabilities to one of the most ubiquitous wearable form factors.
2
The Impact of Response Latency and Task Type on Human-LLM Interaction and Perception
Felicia Fang-Yi Tan (New York University, New York, New York, United States), Moritz Alexander Messerschmidt (National University of Singapore, Singapore, Singapore), Wen Yin (New York University, New York, New York, United States), Oded Nov (New York University, New York, New York, United States)
Responsiveness in large language model (LLM) applications is widely assumed to be critical, yet the impact of latency on user behavior and perception of output quality has not been systematically explored. We report a controlled experiment varying time-to-first-token latency (2, 9, 20 seconds) across two taxonomy-driven knowledge task types (Creation and Advice). Log analyses reveal that user interaction behaviors were robust to latency, yet varied by task type: Creation tasks elicited more frequent prompting than Advice tasks. In contrast, participants who experienced 2-second latencies rated the LLM’s outputs less thoughtful and useful than those who experienced 9- or 20-second latencies. Participants attributed delays to AI deliberation, though long waits occasionally shifted this interpretation toward frustration or concerns about reliability. Overall, this work demonstrates that latency is not simply a cost to reduce but a tunable design variable with ethical implications. We offer design strategies for enhancing human-LLM interaction.
2
Metacognitive Demands and Strategies While Using Off-The-Shelf AI Conversational Agents for Health Information Seeking
Shri Harini Ramesh (University of Calgary, Calgary, Alberta, Canada), Foroozan Daneshzand (Simon Fraser University, Burnaby, British Columbia, Canada), Babak Rashidi (Ottawa General Campus, Ottawa, Ontario, Canada), Shriti Raj (Stanford University, Palo Alto, California, United States), Hariharan Subramonyam (Stanford University, Stanford, California, United States), Fateme Rajabiyazdi (University of Calgary, Calgary, Alberta, Canada)
As Artificial Intelligence (AI) conversational agents become widespread, people are increasingly using them for health information seeking. The use of off-the-shelf conversational agents for health information seeking could place high metacognitive demands (the need for extensive monitoring and control of one's own thought process) on individuals, which could compromise their experience of seeking health information. However, currently, the specific demands that arise while using conversational agents for health information seeking, and the strategies people use to cope with those demands, remain unknown. To address these gaps, we conducted a think-aloud study with 15 participants as they sought health information using our off-the-shelf AI conversational agent. We identify the metacognitive demands such systems impose and the strategies people adopt in response, and propose considerations for designing beyond off-the-shelf interfaces to reduce these demands and support better user experiences and affordances in health information seeking.
1
Influence or Deception? Evaluating Social Suggestions with Persuasive Statements for Security and Privacy Settings
Ayako A. Hasegawa (NICT, Tokyo, Japan), Takahiro Kasama (NICT, Tokyo, Japan), Mitsuaki Akiyama (NTT Social Informatics Laboratories, Tokyo, Japan)
Configuring security and privacy (S&P) settings can be challenging for non-expert users, resulting in excessive dependence on persuasive cues, such as social proofs or expert suggestions. Although such suggestions can promote protective user choices, they can be misused as deceptive patterns that steer users toward less-protective settings. This study examines (1) how source-based suggestions (public vs. experts), when combined with logical persuasive statements, influence decision-making in S&P settings under honest or deceptive conditions and (2) how users evaluate these approaches once deception is revealed. An online experiment with 1,433 U.S. participants utilizing a 2×2×2 factorial design revealed that persuasive statements amplified the effect of social proof- and authority-based cues, which persisted even when promoting less-protective settings. These findings demonstrate the importance of persuasive S&P interfaces that follow transparent and rational design, as well as complementary interventions that foster users' critical assessment and resilience against manipulation.
1
“Too Crowded for a Robot?”: Modeling Human Acceptance Criteria for Elevator-Riding Robots
Seoktae Kim (NAVER LABS, Seongnam, Korea, Republic of), Sangyoung Cho (NAVER LABS, Seongnam, Korea, Republic of), Kahyeon Kim (NAVER LABS, Seongnam, Korea, Republic of), Sure Bak (NAVER LABS, Seongnam, Korea, Republic of)
Robots are increasingly expected to share elevators with people, yet little is known about the conditions shaping acceptance. We introduce the Robot Boarding Area (RBA)—a designated entry zone for robots—and examine how its availability and congestion affect user evaluations. In an online survey, acceptance sharply decreased once the RBA was occupied by any person or large object, even under moderate crowding. A VR experiment confirmed this pattern and further showed that participants preferred when robots refrained from boarding in crowded conditions compared to forcing entry. By formalizing the RBA as an acceptance criterion and demonstrating the value of adaptive skip strategies, this work identifies spatial availability and boarding behavior as central to socially acceptable robot deployment in elevators.
1
"It just requires so much more creativity": Barriers and Workarounds to Gathering Information for AI Contestation
Sohini Upadhyay (Harvard University, Cambridge, Massachusetts, United States), Dasha Pruss (University of Illinois Chicago, Chicago, Illinois, United States), Alicia DeVrio (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Krzysztof Z. Gajos (Harvard University, Allston, Massachusetts, United States), Naveena Karusala (Georgia Institute of Technology, Atlanta, Georgia, United States)
Gathering information about AI systems is essential for contesting their use; it forms the basis of arguments about how AI is causing harm. Information thus plays a central role for advocates like lawyers, journalists, and auditors contesting harmful AI systems. However, there is little systematic understanding of how these actors, many of whom are newly encountering AI in their advocacy work, access and use information effectively in this process. Understanding this information work can offer valuable insights for supporting effective contestation of harmful AI systems. To better understand information work in AI contestation, we interviewed 18 advocates in the United States (US) who have contested the use of AI in high-stakes domains, such as public benefits and housing. We characterize advocates' strategies for accessing information that is useful for contestation, including a range of creative yet resource-intensive and risky workarounds that they use to overcome opacity. We discuss implications of our findings for the effectiveness of popular transparency policy strategies in the US and offer additional ways to support the social fabric that makes advocates' information work effective.
1
"Privacy across the boundary": Examining Perceived Privacy Risk Across Data Transmission and Sharing Ranges of Smart Home Personal Assistants
Shuning Zhang (Tsinghua University, Beijing, China), Shixuan Li (Tsinghua University, Beijing, China), Haobin Xing (Tsinghua University, Beijing, China), Jiarui Liu (Tsinghua University, Beijing, China), Yan Kong (CS, Beijing, China), Xin Yi (Tsinghua University, Beijing, China), Kanye Ye Wang (University of Macau, Macao, China), Hewu Li (Tsinghua University, Beijing, China)
As Smart Home Personal Assistants (SPAs) evolve into social agents, understanding user privacy necessitates interpersonal communication frameworks, such as Privacy Boundary Theory (PBT). To ground our investigation, our three-phase preliminary study (1) identified transmission and sharing ranges as key boundary-related risk factors, (2) categorized relevant SPA functions and data types, and (3) analyzed commercial practices, revealing widespread data sharing and non-transparent safeguards. A subsequent mixed-methods study (N=412 survey, N=40 interviews among the survey participants) assessed users' perceived privacy risks across data types, transmission ranges, and sharing ranges. Results demonstrate a significant, non-linear escalation in perceived risk when data crosses two critical boundaries: the "public network" (transmission) and "third parties" (sharing). This boundary effect holds across data types and demographics. Furthermore, risk perception is modulated by data attributes and contextual privacy calculus. Conversely, anonymization shows limited efficacy, especially for third-party sharing, a finding attributed to user distrust. These findings empirically ground PBT in the SPA context and inform the design of boundary-aware privacy protection.
1
Building Benchmarks from the Ground Up: Community-Centered Evaluation of LLMs in Healthcare Chatbot Settings
Hamna Hamna (Microsoft Corporation, Bangalore, Karnataka, India), Gayatri Bhat (Karya, Bengaluru, India), Sourabrata Mukherjee (Microsoft Research, Bengaluru, Karnataka, India), Faisal M. Lalani (Collective Intelligence Project, New York, New York, United States), Evan Hadfield (Collective Intelligence Project, New York, New York, United States), Divya Siddarth (Collective Intelligence Project, New York, New York, United States), Kalika Bali (Microsoft Research Lab India, Bangalore, India), Sunayana Sitaram (Microsoft Research India, Bangalore, Karnataka, India)
Large Language Models (LLMs) are typically evaluated through general or domain-specific benchmarks testing capabilities that often lack grounding in the lived realities of end users. Critical domains such as healthcare require evaluations that extend beyond artificial or simulated tasks to reflect the everyday needs, cultural practices, and nuanced contexts of communities. We propose Samiksha, a community-driven evaluation pipeline co-created with civil-society organizations (CSOs) and community members. Our approach enables scalable, automated benchmarking through a culturally aware, community-driven pipeline in which community feedback informs what to evaluate, how the benchmark is built, and how outputs are scored. We demonstrate this approach in the health domain in India. Our analysis highlights how current multilingual LLMs address nuanced community health queries, while also offering a scalable pathway for contextually grounded and inclusive LLM evaluation.
1
From Fragmentation to Integration: Exploring the Design Space of AI Agents for Human-as-the-Unit Privacy Management
Eryue Xu (University of Illinois Urbana-Champaign, Urbana, Illinois, United States), Tianshi Li (Northeastern University, Boston, Massachusetts, United States)
Managing one’s digital footprint is overwhelming, as it spans multiple platforms and involves countless context-dependent decisions. Recent advances in agentic AI offer ways forward by enabling holistic, contextual privacy-enhancing solutions. Building on this potential, we adopted a “human-as-the-unit” perspective and investigated users’ cross-context privacy challenges through 12 semi-structured interviews. Results reveal that people rely on ad hoc manual strategies while lacking comprehensive privacy controls, highlighting nine privacy-management challenges across applications, temporal contexts, and relationships. To explore solutions, we generated nine AI agent concepts and evaluated them via a speed-dating survey with 116 US participants. The three highest-ranked concepts were all post-sharing management tools with half or full agent autonomy, with users expressing greater trust in AI accuracy than in their own efforts. Our findings highlight a promising design space where users see AI agents bridging the fragments in privacy management, particularly through automated, comprehensive post-sharing remediation of users’ digital footprints.
1
Funding AI for Good: A Call for Meaningful Engagement
Hongjin Lin (Harvard University, Allston, Massachusetts, United States), Anna Kawakami (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Catherine D'Ignazio (MIT, Cambridge, Massachusetts, United States), Kenneth Holstein (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Krzysztof Z. Gajos (Harvard University, Allston, Massachusetts, United States)
Artificial Intelligence for Social Good (AI4SG) is a growing area that explores AI's potential to address social issues, such as public health. Yet prior work has shown limited evidence of its tangible benefits for intended communities, and projects frequently face real-world deployment and sustainability challenges. While existing HCI literature on AI4SG initiatives primarily focuses on the mechanisms of funded projects and their outcomes, much less attention has been given to the upstream funding agendas that influence project approaches. In this work, we conducted a reflexive thematic analysis of 35 funding documents, representing about $410 million USD in total investments. We uncovered a spectrum of conceptual framings of AI4SG and the approaches that funding rhetoric promoted: from biasing towards technology capacities (more techno-centric) to emphasizing contextual understanding of the social problems at hand alongside technology capacities (more balanced). Drawing on our findings on how funding documents construct AI4SG, we offer recommendations for funders to embed more balanced approaches in future funding call designs. We further discuss implications for how the HCI community can positively shape AI4SG funding design processes.
1
The Golden Goose of Toxicity: Turning Hostility into Platform Revenue
Bastian Kordyaka (Åbo Akademi University, Turku, Finland)
Toxic behavior is a problem in online gaming platforms such as League of Legends (LoL), undermining player well-being and fairness. Platforms increasingly optimize “engagement” without distinguishing between positive and negative participation. Drawing on dual-process theory, we ask when hostile interactions can become economically productive. In an explanatory sequential mixed-methods study with LoL players, Study 1 (N = 430) models how reflective, System 2 brand bonds (i.e., brand personality, brand involvement, brand engagement) and negatively valenced, System 1 reactive responses (self-reported toxic behavior) relate to in-game spending. Study 2 (N = 80) uses reflexive thematic analysis to show how players interpret, repair, and channel frustration and hostility through cosmetics, events, and progression systems. Across studies, toxic behavior is positively associated with self-reported purchases and partially transmits the association between reflective brand attachments and spending. We contribute a dual-pathways account of how governance and monetization infrastructures can fold harmful engagement into value extraction, and we outline critical design provocations for centrally governed, highly monetized platforms.
1
I Felt Like I Need to Fit in Someone Else's Body - Understanding Body-Centered UX Design for Online Fashion Shopping
Margarita Osipova (Bauhaus-Universität Weimar, Weimar, Germany), Urszula Kulon (Bauhaus-Universität Weimar, Weimar, Germany), Adithi Mahesh (Bauhaus-Universität Weimar, Weimar, Thuringia, Germany), Olesia Kirillova (Independent Researcher, Paphos, Cyprus), Marion Koelle (Hochschule RheinMain, Wiesbaden, Germany), Eva Hornecker (Bauhaus-Universität Weimar, Weimar, Germany)
Decades of online fashion retail and investment in its usability have led to a seemingly refined user experience. Yet, our study shows that female online shoppers, who make up the largest user group, experience a conflicted love-hate relationship when shopping online. Adopting a feminist HCI perspective, we contribute insights from a multi-step qualitative approach involving probes, co-design, iterative prototyping and body maps. We demonstrate that even screen-based website designs are deeply entangled with users’ embodied experiences. Through our analysis, we identify where such designs contribute to heightened emotional labour and negative user experiences. Our work offers concrete design implications centred around inclusivity, the predictive user experience of wearing and caring for garments, and transparency of information. We embody these implications in an interactive prototype and use it to validate our recommendations for a body-centred approach to UX design.
1
"It's Confusing, Insecure, and Messy" – Mapping the Gaps Between Stakeholders' Cybersecurity Mental Models in the Danish Defence Sector
Judith Kankam-Boateng (University of Southern Denmark, Odense, Denmark), Marco Peressotti (University of Southern Denmark, Odense, Denmark), Jan Stentoft (University of Southern Denmark, Kolding, Denmark), Kent Wickstrøm Jensen (University of Southern Denmark, Kolding, Denmark), Vincent Charles Keating (University of Southern Denmark, Odense, Denmark), Louise Alison Tumchewics (University of Southern Denmark, Odense, Denmark), Olivier Schmitt (Royal Danish Academy, Copenhagen, Denmark), Amelie Theussen (Royal Danish Academy, Copenhagen, Denmark), Peter Mayer (University of Southern Denmark, Odense, Denmark)
Small and medium-sized enterprises (SMEs) are facing growing cybersecurity threats amidst limited resources and regulatory complexity. This complexity stems from diverse stakeholders in the regulatory process, including policymakers, industry associations, and companies that must implement the regulations. Misalignments between these different stakeholders can further compound the complexity. Against this backdrop, we investigate the cybersecurity mental models held by three stakeholder groups in Denmark’s defence sector and how these mental models might influence regulatory processes. Using a qualitative approach combining focus groups with 6 policymakers, 11 policy promoters (industry associations), and 12 policy implementers (SMEs), we reveal key misalignments in perceptions of risk, threats, cyber readiness, and policy interpretation. Our findings further show that SMEs often treat cybersecurity as a compliance task, while policymakers assume strategic readiness. Based on our results, we suggest recommendations for aligning governance frameworks with organisational realities.
1
Effects of Small Latency Variations in 2D Target Selection Tasks
Andreas Schmid (University of Regensburg, Regensburg, Germany), Isabell Röhr (University of Regensburg, Regensburg, Germany), Martina Emmert (University of Regensburg, Regensburg, Germany), Niels Henze (University of Regensburg, Regensburg, Germany), Raphael Wimmer (University of Regensburg, Regensburg, Germany)
Systems' latency — the time between user input and system response — slows down the human-computer interaction loop. Several studies revealed negative objective and subjective effects of high latency, typically treating latency as a constant delay. Because latency varies significantly in practice, recent work also assessed the effects of large and sudden latency changes. In practice, however, latency variations are small but frequent. As the effects of such variations are unclear, we investigate how small latency variations (+/- 50 ms) affect users' performance and perceived task load for 2D target selection tasks with static and moving targets. For static targets, we found that latency variation causes significantly higher completion times and less efficient trajectories, however with small effect sizes. In contrast, we found no significant effects on any performance measure for moving targets. Our findings indicate that the effect of latency variation is generally very small and quickly disappears for non-trivial tasks.
1
BAIT: Visual-illusion-inspired Privacy Preservation for Mobile Data Visualization
Sizhe Cheng (Nanyang Technological University, Singapore, Singapore), Songheng Zhang (Singapore Management University, Singapore, Singapore), Dong Ma (Singapore Management University, Singapore, Singapore), Yong Wang (Nanyang Technological University, Singapore, Singapore)
With the prevalence of mobile data visualizations, there have been growing concerns about their privacy risks, especially shoulder surfing attacks. Inspired by prior research on visual illusion, we propose BAIT, a novel approach to automatically generate privacy-preserving visualizations by stacking a decoy visualization over a given visualization. It allows visualization owners at proximity to clearly discern the original visualization and makes shoulder surfers at a distance be misled by the decoy visualization, by adjusting different visual channels of a decoy visualization (e.g., shape, position, tilt, size, color and spatial frequency). We explicitly model human perception effect at different viewing distances to optimize the decoy visualization design. Privacy-preserving examples and two in-depth user studies demonstrate the effectiveness of BAIT in both controlled lab study and real-world scenarios.
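The abstract lists spatial frequency among the visual channels BAIT adjusts so that near and far viewers perceive different charts. One well-known way to realize distance-dependent perception is the classic hybrid-image technique: blend the low spatial frequencies of a decoy (which dominate at a distance) with the high frequencies of the original (which only resolve up close). The sketch below illustrates that general principle only; it is not the paper's algorithm, and the file names and cutoff `SIGMA` are assumptions.

```python
# Hybrid-image blend: a generic illustration of the "spatial frequency"
# channel, NOT the BAIT system itself. File names and SIGMA are assumed.
import numpy as np
from scipy.ndimage import gaussian_filter
from imageio.v3 import imread, imwrite

original = imread("original_chart.png").astype(np.float32) / 255.0
decoy = imread("decoy_chart.png").astype(np.float32) / 255.0  # same shape

SIGMA = 8.0  # cutoff between "near" and "far" spatial frequencies (assumed)

# Low frequencies survive distant viewing; take them from the decoy.
low = gaussian_filter(decoy, sigma=(SIGMA, SIGMA, 0))
# High frequencies are only resolvable up close; take them from the original.
high = original - gaussian_filter(original, sigma=(SIGMA, SIGMA, 0))

hybrid = np.clip(low + high, 0.0, 1.0)
imwrite("hybrid_chart.png", (hybrid * 255).astype(np.uint8))
```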
1
Access Over Deception: Fighting Deceptive Patterns through Accessibility
Tobias Pellkvist (TU Wien, Vienna, Austria), Katie Seaborn (Institute of Science Tokyo, Tokyo, Japan), Miu Kojima (Tokyo Institute of Technology, Tokyo, Japan)
Deceptive patterns, i.e., dark patterns and manipulative user interfaces (UI), are a widely used design method that aims to manipulate users into acting against their own interests. These patterns may particularly affect people with less education, people with visual impairments, and older adults. Yet access is a critical feature of the user experience (UX), development standards, and law. We considered whether and how the Web Content Accessibility Guidelines (WCAG) and related legislation, such as the European Accessibility Act (EAA), can act as a tool against deceptive patterns. We used these guidelines and legal statutes in a heuristic evaluation to analyze whether and how deceptive patterns violate or conform to these standards. Although statistical analysis revealed no significant relationship, we identified three patterns implicated by the WCAG guidelines: Countdown Timer, Auto-Play, and Hidden Information. We offer this approach as one tool in the fight against UI-based deception and in support of inclusive design.
1
“I Wanted Them to Think That I Wrote That”: AI-Generated Self-Presentation on Dating Apps and Implications of Non-Disclosure on Informed Consent
Meryem Barkallah (University of Michigan-Flint, Flint, Michigan, United States), Douglas Zytko (University of Michigan-Flint, Flint, Michigan, United States)
Generative artificial intelligence (AI) adds unprecedented scale to capabilities for self-presentation online that may diverge from one’s physical-world identity, thus potentially misinforming consent to intimate interactions, such as in online dating. Yet there is little empirical understanding of AI-generated self-presentation and (non-)disclosure to interaction partners. We present a qualitative survey of 113 online daters who used AI-generated content in their profiles or messages seen by in-person meeting partners. Findings show that generative AI is often used to fabricate attractive dating personalities through profile text and bios, with no relevance to one’s actual identity, and is seldom disclosed to meeting partners to avoid romantic rejection. Because sexual assault is defined by mis- or under-informed consent, the study positions generative AI as a potentially significant sexual assault risk factor through its use for presentation of non-physical traits that are influential to dating outcomes yet not readily identified as AI-generated upon meeting face-to-face. Content warning: this paper discusses forms of sexual violence including rape by deception.
1
From Discovery to Decisions: Archetypal Journeys of Mobile App Users and Their Implications on Privacy
HTMA Riyadh (CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany), Divyanshu Bhardwaj (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Maria Victoria Hellenthal (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Alexander Hart (CISPA Helmholtz Center for Information Security, Saarbrücken, Saarland, Germany), Katharina Krombholz (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany), Sven Bugiel (CISPA Helmholtz Center for Information Security, Saarbrücken, Germany)
Mobile permission decisions are often studied at the moment a permission request appears. However, our study shows that users’ choices are shaped much earlier, across a multi-stage journey that begins with app-need recognition and unfolds through app discovery, exploration, selection, installation, and first use. Drawing on interviews with 19 U.S. Android users, we map this process and identify four archetypal journeys that explain how early cues, such as discovery sources, app type, and social trust, shape later permission behavior. These insights align with theoretical models like Privacy Calculus, showing how users weigh perceived benefits and risks at each step, and complement Contextual Integrity theory, explaining how social norms and information flows shape expectations and constrain privacy agency across steps. We contribute an empirically grounded framework that clarifies why permission outcomes vary across contexts. Our results reframe mobile privacy as a sequential, path-dependent process, offering implications for future design and research.
1
Scrollytelling as an Alternative Format for Privacy Policies
Gonzalo Gabriel Méndez (Universidad Politécnica de Valencia, Valencia, Spain), Jose Such (INGENIO (CSIC-UPV), Valencia, Spain)
Privacy policies are long, complex, and rarely read, which limits their effectiveness in informed consent. We investigate scrollytelling, a scroll-driven narrative approach, as a privacy policy presentation format. We built a prototype that interleaves the full policy text with animated visuals to create a dynamic reading experience. In an online study (N=454), we compared our tool against text, two nutrition-label variants, and a standalone interactive visualization. Scrollytelling improved user experience over text, yielding higher engagement, lower cognitive load, greater willingness to adopt the format, and increased perceived clarity. It also matched other formats on comprehension accuracy and confidence, with only one nutrition-label variant performing slightly better. Changes in perceived understanding, transparency, and trust were small and statistically inconclusive. These findings suggest that scrollytelling can preserve comprehension while enhancing the experience of policy reading. We discuss design implications for accessible policy communication and identify directions for increasing transparency and user trust.
1
Constructing Everyday Well-Being: Insights from God-Saeng (God生) for Personal Informatics
Inhwa Song (Princeton University, Princeton, New Jersey, United States), Kwangyoung Lee (KAIST, Daejeon, Korea, Republic of), Janghee Cho (National University of Singapore, Singapore, Singapore), Amon Rapp (University of Turin, Torino, Italy), Hwajung Hong (KAIST, Daejeon, Korea, Republic of)
While Personal Informatics (PI) systems support behavior change, everyday well-being involves more than achieving individual target behaviors. It is shaped by cultural narratives that give actions meaning. In South Korea, the God-Saeng (God生) phenomenon—encompassing disciplined, collective, and publicly documented self-improvement practices—offers a lens into how well-being is negotiated in daily life. We conducted a 10-day probe (N=24) with bite-sized missions to examine how young adults engaged in God-Saeng. Participants relied on planning practices, accountability infrastructures, and datafication to stabilize themselves, yet these same routines also intensified pressures toward self-monitoring and performance. They navigated tensions between consistency and flexibility, authenticity and visibility, and productivity and broader values such as relationships, and reinterpreted ordinary activities through sociocultural contexts. These insights suggest design opportunities for PI systems that move beyond tracking, toward digital instruments that help users negotiate tensions, make meaning, and reflexively understand how technologies participate in their culturally and existentially situated well-being.
1
Becoming the Center of Other People's Identity Struggles: Content Creators Who Question, Critique, and Leave High-Pressure, Identity-Defining Communities via Social Media
Eddie A. Gomez Schieber (University of Georgia, Athens, Georgia, United States), Ari Schlesinger (University of Georgia, Athens, Georgia, United States)
The process of leaving high-pressure, identity-defining communities can produce profound identity changes. This leaving process propels some people to seek support online and to share their experiences publicly. We interviewed 13 social media content creators who made content as a part of, or in response to, their leaving process to understand their motivations and the ways audiences engaged with their work. We then explored how platforms transformed creators' work into collaborative spaces for social support. As creators gained audiences, their visibility introduced new incentives, obligations, and risks. Creators had to manage the challenges of maintaining safe spaces for their audiences, meeting audience expectations, and addressing heightened safety concerns for themselves. We end by discussing the networked structure of creator-centered communities, the impacts of platforms on creator communities, and the emotional harms associated with being at the center of a community focused on social support.
1
SoundBubble: Finger-Bound Virtual Microphone using Headset/Glasses Beamforming
Daehwa Kim (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States), Chris Harrison (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Hands are the chief appendage with which we manipulate the world around us, creating sounds as they go. As such, they are a rich source of information that computers can leverage for input and context sensing. Indeed, many prior works in HCI have explored this idea by instrumenting users' hands with a microphone, often integrated into a ring, wristband, or watch. In this work, we explore an alternative bare-hands approach --- by using a microphone array integrated into a user's headset/glasses, we can use beamforming to create a virtual microphone that tracks with the user's fingers in 3D space. We show this method can capture even the subtle noise of a finger translating across surfaces, including skin-to-skin contact for micro-gestures, as well as passive widget interactions.
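For readers unfamiliar with beamforming, a "virtual microphone" aimed at a 3D point can be sketched with textbook delay-and-sum: align the per-microphone arrival delays from the target point and sum the channels. The code below is a generic formulation, not the authors' implementation; the microphone layout, sample rate, and target position are invented for illustration.

```python
# Minimal delay-and-sum beamformer aimed at a 3D point -- a generic sketch of
# the kind of "virtual microphone" the abstract describes, not the authors'
# system. Mic geometry, sample rate, and the signals are assumptions.
import numpy as np

FS = 16_000          # sample rate (Hz), assumed
C = 343.0            # speed of sound (m/s)

def delay_and_sum(signals: np.ndarray, mic_pos: np.ndarray,
                  target: np.ndarray) -> np.ndarray:
    """signals: (n_mics, n_samples); mic_pos: (n_mics, 3); target: (3,)."""
    dists = np.linalg.norm(mic_pos - target, axis=1)   # mic-to-target (m)
    delays = (dists - dists.min()) / C                 # relative delays (s)
    n = signals.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    out = np.zeros(n)
    for sig, tau in zip(signals, delays):
        # Advance each channel by its relative delay in the frequency domain,
        # so wavefronts originating at the target point add coherently.
        spec = np.fft.rfft(sig) * np.exp(2j * np.pi * freqs * tau)
        out += np.fft.irfft(spec, n=n)
    return out / len(signals)

# Example: 4 mics along a glasses frame, focused 40 cm ahead of the face.
mics = np.array([[-0.07, 0, 0], [-0.02, 0, 0], [0.02, 0, 0], [0.07, 0, 0]])
audio = np.random.randn(4, FS)  # stand-in for one second of captured audio
focused = delay_and_sum(audio, mics, target=np.array([0.0, 0.4, 0.0]))
```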
1
Video Game Archaeology as Hauntological Practice: A Collaborative Autoethnography in Elden Ring Shadow of the Erdtree
Florence Smith Nicholls (Queen Mary University of London, London, United Kingdom), Michael Cook (King's College London, London, United Kingdom)
Video game archaeology is a relatively new field. This can involve studying players through the traces they leave in digital game worlds, though only limited work of this kind exists. Furthermore, the potential of these methods to record ephemeral play experiences for preservation purposes has not been widely explored. We conducted an archaeological survey of five sites in Elden Ring, taking place directly before, during and after the release of a major expansion. We present what is, to our knowledge, the first collaborative autoethnography of an archaeological survey in a video game, reflecting on our recorded footage, notes and data. Through a diffractive analysis, we demonstrate the value of video game archaeology as a form of hauntological practice that allows for a deeper reflection on the knowledge production process, and in doing so contribute to the development of new interdisciplinary methodologies in HCI, archaeology and games research.
1
"I just have faith in my wallet to not mismanage my crypto": Investigating Changes in Users' Security Perceptions Post-FTX Collapse
Mingyi Liu (Georgia Institute of Technology, Atlanta, Georgia, United States), Nivedita Singh (Sungkyunkwan University, Seoul, Korea, Republic of), Jun Ho Huh (Samsung Electronics, Suwon, Korea, Republic of), Hyoungshick Kim (Sungkyunkwan University, Seoul, Korea, Republic of), Taesoo Kim (Georgia Institute of Technology, Atlanta, Georgia, United States)
Non-custodial wallets (NCWs) grant users full control over their keys and crypto assets, whereas custodial wallets (CWs) rely on centralized exchanges. Security breaches at major exchanges are on the rise, exemplified by the 2022 FTX fraud, yet their influence on users' security perceptions and risk mitigation behaviors remains understudied. We conducted 22 semi-structured interviews and a follow-up survey with 430 participants to address this gap concerning the FTX incident. We find that learning about FTX reduced trust in CWs and increased perceived security of NCWs. However, most users who were using non-SEC-compliant (equally risky) CWs did not transfer crypto to mitigate potential threats, showing continued trust in current wallets. Those who did often moved all funds from CWs to traditional banks rather than adopting NCWs. Notably, only one-third of survey participants were aware that centralized exchanges hold their private keys, and many still used noncompliant exchanges.
1
Behind the Meme: Understanding User Experiences with Memes on Social Media
Yuqi Niu (Shanghai Jiao Tong University, Shanghai, China), Dilara Keküllüoğlu (Sabanci University, İstanbul, Turkey), Weidong Qiu (Shanghai Jiao Tong University, Shanghai, China), Nadin Kokciyan (University of Edinburgh, Edinburgh, United Kingdom)
While memes enhance social interaction on social media, they can raise privacy and security concerns. Despite research on overtly toxic or unsafe memes, little attention has been given to users' experiences with seemingly safe memes and how contextual factors trigger privacy concerns. This study explores users’ comfort levels, influencing factors, underlying reasons for discomfort, and unmet needs when engaging with such memes. We first collected and analyzed 2,317 Reddit posts describing real-world meme experiences, then conducted an online survey with 324 participants to evaluate comfort across curated scenarios. Our findings reveal that perceived-safe memes can cause harm when shared inappropriately, with comfort shaped by content and context. Privacy concerns intensify with deeper involvement, strangers, and sensitive meme topics. We identified users' desire for consent and control in meme interactions. Based on our study, we make recommendations for users, developers of social media platforms and policymakers to address meme-related privacy and contextual concerns.
1
Balancing Goals, Health, and Cost: A Food Information System for Managing Complex Choices and Fostering Sustained Food Agency
Annalisa Szymanski (University of Notre Dame, South Bend, Indiana, United States), Jeongwon Jo (University of Notre Dame, South Bend, Indiana, United States), Michelle Sawwan (University of Notre Dame, South Bend, Indiana, United States), Heather Eicher-Miller (Purdue University, West Lafayette, Indiana, United States), Ann-Marie Conrado (University of Notre Dame, Notre Dame, Indiana, United States), Danielle Wood (University of Notre Dame, South Bend, Indiana, United States), Tawanna R. Dillahunt (University of Michigan, Ann Arbor, Michigan, United States), Ronald Metoyer (University of Notre Dame, South Bend, Indiana, United States)
Technology offers new opportunities to support healthier food choices, particularly for individuals in low-income communities who face systemic barriers to obtaining nutritious, affordable groceries. We introduce a novel conceptual model of grocery planning that frames food purchasing as a multi-objective optimization problem that considers cost, nutrition components, and a consumer's personal dietary goals. Guided by Zimmerman’s model of Self-Regulated Learning and prior research on food agency, we designed the Food Information System, a planning tool that provides optimized product recommendations aligned with users’ goals by integrating store inventory, prices, and nutritional data. We evaluated our system in an eight-week within-subjects intervention with 55 participants from a food-insecure community, followed by focus group sessions. While overall Healthy Eating Index scores remained largely stable, participants reported improved nutritional awareness and greater perceived agency in planning and purchasing groceries. We discuss design implications to support food agency by promoting long-term food literacy and by enhancing autonomy in making food choices.
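The abstract's framing of grocery planning as multi-objective optimization over cost, nutrition, and goals echoes the classic "diet problem," which can be written as a linear program. The toy sketch below shows that general formulation only; the products, prices, and nutrient values are invented and have no connection to the paper's Food Information System.

```python
# The classic diet problem as a linear program -- a toy illustration of the
# cost/nutrition optimization framing in the abstract, not the paper's
# system. All products, prices, and nutrient values are made up.
from scipy.optimize import linprog

# Columns: [rice, beans, milk]; values per unit purchased (assumed).
cost = [2.0, 3.0, 4.0]                  # objective: minimize total cost ($)
A_ub = [
    [-200, -250, -150],                 # calories, negated to encode >=
    [-4, -15, -8],                      # protein grams, negated to encode >=
]
b_ub = [-2000, -50]                     # need >= 2000 kcal and >= 50 g protein
bounds = [(0, 10)] * 3                  # buy between 0 and 10 units of each

res = linprog(c=cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, res.fun)                   # optimal quantities and total cost
```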
1
BuyMate: Making AI Interventions Effective in Promoting Rational Consumption in Live Commerce
Shiyi Wang (Tsinghua University, Beijing, China), Yishan Liu (Tsinghua University, Beijing, China), Zhihang Zhu (Department of Computer Science and Technology, Beijing, China), Jintao Liu (Xiamen University Malaysia, Sepang, Malaysia), Xuerui Ma (Jilin University, Changchun, China), Xin Guan (Rixin College, Beijing, China), Tianyang Feng (Academy of Art & Design, Beijing, China), Qingfei Zhao (Tsinghua University, Academy of Arts & Design, Beijing, China), XinZhi Zhang (Beihang University, Beijing, China), Yuan Yao (School of Architecture and Design, Beijing, China), Haipeng Mi (Tsinghua University, Beijing, China)
Live commerce platforms frequently employ algorithmic recommendations and time-limited promotions to trigger impulsive purchases, challenging rational consumer decision-making. While existing research has identified manipulative design patterns in live commerce, significant gaps remain in understanding consumer psychological motivations and developing counter-persuasion interventions. We conducted a multi-stage formative study involving surveys (N = 116), interviews (N = 21), and co-design workshops (N = 16) to explore user preferences for rational consumption support systems. Informed by these insights, we designed BuyMate, which provides gentle, real-time rational interventions through product comparison and persuasive speech reframing. A user evaluation (N = 35) demonstrates that the system effectively reduces impulsive purchases, enhances decision autonomy, and promotes sustainable consumption. This work contributes an AI-driven counter-persuasion approach, identifies user-centered principles for adaptive interventions, and offers practical guidance for responsible AI in digital commerce.
1
Moving Beyond Passwords: Investigating the Effect of Digital Nudges on Passkey Adoption
Tobias Reittinger (University of Regensburg, Regensburg, Germany), Magdalena Glas (University of Regensburg, Regensburg, Germany), Günther Pernul (University of Regensburg, Regensburg, Germany)
Passwords suffer from major usability hurdles that foster insecure practices and undermine cybersecurity. Passkeys were introduced to address these issues; however, adoption remains low. Digital nudges offer a promising way to accelerate passkey adoption, yet research lacks empirical insight about when to nudge and which nudge types and designs are most effective. We therefore employed a mixed-methods approach to examine the impact of nudges on passkey adoption across five touchpoints in the digital user journey: during registration, login, account recovery, while in the settings menu, and during user activity. First, we conducted 15 expert interviews to identify candidate nudges and their design principles. We evaluate these nudges in a randomized controlled trial (RCT) with 3,680 participants on a commercial healthcare platform. Our results indicate that digital nudges can significantly increase passkey adoption when applied at the right touchpoints, encouraging users to move beyond passwords.
1
Tool-Assisted CVSS Vulnerability Scoring: A Controlled Quantitative Study of Human Assessment
Siqi Zhang (Vrije Universiteit Amsterdam, Amsterdam, Netherlands), Minjie Cai (Carleton University, Ottawa, Ontario, Canada), Lianying Zhao (Carleton University, Ottawa, Ontario, Canada), Xavier de Carné de Carnavalet (Radboud University, Nijmegen, Netherlands), Fabio Massacci (Vrije Universiteit Amsterdam, Amsterdam, NH, Netherlands), Mengyuan Zhang (Vrije Universiteit Amsterdam, Amsterdam, Netherlands)
Quantitative vulnerability assessment is central to security management, guiding how risks are prioritized and mitigated. Yet, severity scoring relies on human judgment and is therefore subject to differences in experience, interpretation, and diligence; prior work has even shown expert disagreement. We examine an NLP-based assistive tool that visualizes keyword cues during assessment. In a controlled survey of 389 participants recruited via Amazon MTurk and Prolific, we statistically analyze how participant skills/demographics, vulnerability characteristics, and tool support affect outcomes. Results show the tool does not consistently improve assessment accuracy across expertise levels, but can help for specific vulnerability types (e.g., CWE-787) and CVSS metrics (AC, PR, Scope), and can increase user confidence. Beyond immediate performance, the tool can support training for manual assessment tasks that are hard to automate, as learning effects yield significant improvements on subsequent tasks. This work informs the design of cybersecurity decision-support tools and motivates future research on security training and human-centered security.
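For context on the metric abbreviations the abstract references (AV, AC, PR, UI, Scope, C/I/A), the CVSS v3.1 base score combines fixed metric weights through the equations published in the FIRST.org specification. The sketch below follows that public formula; the example vector at the bottom is an arbitrary illustration.

```python
# CVSS v3.1 base-score computation, following the published FIRST.org
# specification, to ground the metric abbreviations in the abstract.
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_UNCHANGED = {"N": 0.85, "L": 0.62, "H": 0.27}     # Privileges Required
PR_CHANGED = {"N": 0.85, "L": 0.68, "H": 0.50}
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def roundup(x: float) -> float:
    # Spec-defined rounding: smallest multiple of 0.1 that is >= x.
    return math.ceil(x * 10) / 10

def base_score(av, ac, pr, ui, scope, c, i, a) -> float:
    changed = scope == "C"
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
              if changed else 6.42 * iss)
    pr_val = (PR_CHANGED if changed else PR_UNCHANGED)[pr]
    exploitability = 8.22 * AV[av] * AC[ac] * pr_val * UI[ui]
    if impact <= 0:
        return 0.0
    raw = impact + exploitability
    return roundup(min(1.08 * raw if changed else raw, 10))

# CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H -> 9.8 (Critical)
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))
```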
1
Augmenting Clinical Decision-Making with an Interactive and Interpretable AI Copilot: A Real-World User Study with Clinicians in Nephrology and Obstetrics
Yinghao Zhu (Peking University, Beijing, China), Dehao Sui (Peking University, Beijing, China), Zixiang Wang (Peking University, Beijing, China), Xuning Hu (Xi'an Jiaotong-Liverpool University, Suzhou, China), Lei Gu (Peking University, Beijing, China), Yifan Qi (Nankai University, Tianjin, China), Tianchen Wu (Peking University Third Hospital, Beijing, China), Ling Wang (Affiliated Xuzhou Municipal Hospital of Xuzhou Medical University, Jiangsu, China), Yuan Wei (Peking University Third Hospital, Beijing, China), Wen Tang (Peking University, Beijing, China), Zhihan Cui (Peking University, Beijing, China), Yasha Wang (Peking University, Beijing, China), Lequan Yu (The University of Hong Kong, Hong Kong, China), Ewen M Harrison (The University of Edinburgh, Edinburgh, United Kingdom), Junyi Gao (University of Edinburgh, Edinburgh, United Kingdom), Liantao Ma (Peking University, Beijing, China)
Clinician skepticism toward opaque AI hinders adoption in high-stakes healthcare. We present AICare, an interactive and interpretable AI copilot for collaborative clinical decision-making. By analyzing longitudinal electronic health records, AICare grounds dynamic risk predictions in scrutable visualizations and LLM-driven diagnostic recommendations. Through a within-subjects counterbalanced study with 16 clinicians across nephrology and obstetrics, we comprehensively evaluated AICare using objective measures (task completion time and error rate), subjective assessments (NASA-TLX, SUS, and confidence ratings), and semi-structured interviews. Our findings indicate that AICare reduced cognitive workload. Beyond performance metrics, qualitative analysis reveals that trust is actively constructed through verification, with interaction strategies diverging by expertise: junior clinicians used the system as cognitive scaffolding to structure their analysis, while experts engaged in adversarial verification to challenge the AI's logic. This work offers design implications for creating AI systems that function as transparent partners, accommodating diverse reasoning styles to augment rather than replace clinical judgment.
1
PrivWeb: Unobtrusive and Content-aware Privacy Protection For Web Agents
Shuning Zhang (Tsinghua University, Beijing, China), Yutong Jiang (Tongji University, Shanghai, China), Rongjun Ma (Aalto University, Espoo, Finland), Yuting Yang (University of Michigan, Ann Arbor, Michigan, United States), Mingyao Xu (University of Washington, Seattle, Washington, United States), Zhixin Huang (Shantou University, Shantou, China), Xin Yi (Tsinghua University, Beijing, China), Hewu Li (Tsinghua University, Beijing, China)
While web agents have gained popularity by automating web interactions, their requirement for interface access introduces privacy risks that remain understudied, particularly from users' perspective. Through a formative study (N=15), we found that users frequently misunderstand agent data practices and desire unobtrusive, transparent data management. To achieve this, we developed PrivWeb, a trusted add-on for web agents that utilizes a localized LLM to anonymize private information on interfaces according to user preferences. It employs tiered delegation to balance automation and intrusiveness, using ambient notifications for low-sensitivity data and enforcing a mandatory pause for high-sensitivity data. A user study (N=14) across travel, information retrieval, shopping, and entertainment tasks showed that PrivWeb enhances perceived privacy protection and trust compared to transparency-only baselines, without increasing cognitive load. Crucially, we identified users' delegation strategies: they prefer to manually execute steps involving high-sensitivity data while granting the agent access to low-sensitivity data.
1
Design and Multi-level Evaluation of MAP-X: a Medically Aligned, Patient-Centered AI Explanation System
Yuyoung Kim (HAII Corp., Seoul, Korea, Republic of), Minjung Kim (HAII Corp., Seoul, Korea, Republic of), Saebyeol Kim (HAII Corp., Seoul, Korea, Republic of), Sooyoun Cho (HAII Corp., Seoul, Korea, Republic of), Jinwoo Kim (HAII Corp., Seoul, Korea, Republic of)
Health artificial intelligence (AI) is often developed in high-stakes, data-scarce contexts, where both clinical validity and patient comprehension are critical; however, rigorous, multi-level evaluation of explanations in real-world patient-facing settings remains challenging. To enhance patient understanding and trust, we propose a practical blueprint for designing and evaluating medically aligned, patient-centered explanations (MAP-X). We instantiate this blueprint in the MAP-X system, which employs a large language model (LLM) with retrieval-augmented generation (RAG) to translate clinical assessments into an understandable interface. We conducted a three-phase evaluation following a multi-level validation framework: a functional evaluation of faithfulness, a clinician evaluation of workflow suitability, and a patient evaluation of perceived understanding and trust. Our findings suggest that MAP-X may support clinical adoption. In the patient study, MAP-X showed higher reported trust and a positive trend in explanation satisfaction. Interviews suggested clearer understanding of assessment results. Overall, MAP-X produced clinically relevant explanations with reasonable faithfulness and usability. Clinician oversight remains necessary.
1
Robust Methods for Developer Screening in Rapidly Evolving AI Contexts
Raphael Serafini (University of Cologne, Cologne, Germany), Nino Weber (Ruhr University Bochum, Bochum, Germany), Asli Yardim (Ruhr University Bochum, Bochum, Germany), Stefan Albert Horstmann (Ruhr University Bochum, Bochum, Germany), Alena Naiakshina (University of Cologne, Cologne, Germany)
The rise of AI-powered tools like ChatGPT enables non-programmers to bypass programming screening questions, undermining internal validity in usable security and privacy, and software engineering studies. Past ChatGPT-resistant tasks proposed static visual questions, which ChatGPT can now circumvent. Therefore, we tested alternative approaches such as video- and audio-based screeners that reveal key information step by step under strict time constraints to distinguish programmers from non-programmers. To this end, we conducted a study with 74 participants across three groups: programmers, non-programmers without AI assistance, and non-programmers using ChatGPT. Our results showed that audio-based screeners were robust against ChatGPT-based cheating, as non-programmers struggled to find correct answers within time limits, whereas programmers demonstrated high accuracy with minimal time pressure. Based on our findings, we recommend six audio-based ChatGPT-resistant screening questions that maximize screening effectiveness and efficiency and suggest a 215-second instrument that includes 95.87% of programmers while excluding 99.69% of non-programmers.
1
Obscuring Undesirable Individuals to Alleviate Social Discomfort Using Diminished Reality
Jun Zhang (Hubei Institute of Fine Arts, Wuhan, China), Weifang Liu (Hubei Institute of Fine Arts, Wuhan, China), Xinliu Wu (Shanghai Jiao Tong University, Shanghai, China), Anan Jin (Shanghai Jiao Tong University, Shanghai, China), Baoyi Huang (Macao Polytechnic University, Macao SAR, China), Bo Liu (Shanghai Jiao Tong University, Shanghai, China), Jiaxin Zhang (Southern University of Science and Technology, Shenzhen, China), Xingyu Lan (Fudan University, Shanghai, China), Yan Luximon (The Hong Kong Polytechnic University, Kowloon, Hong Kong), Jie Zhang (Macao Polytechnic University, Macao, China)
In interpersonal interactions, individuals often exhibit avoidance behaviors toward others they find unpleasant, which can undermine the comfort of everyday social experiences. Existing human-computer interaction (HCI) research has primarily focused on promoting social connections, while support for avoidance-oriented social situations remains underexplored. To address this gap, we propose leveraging Diminished Reality (DR) technology to obscure perceptual cues of undesirable individuals. We designed and implemented a mixed reality prototype system and conducted experiments manipulating both the occlusion method and social distance. Results indicate that DR significantly reduces users' social anxiety and sense of social presence. Moreover, participants generally expressed positive attitudes toward usage intention and ethical considerations. This work extends HCI research on social comfort, shifting the focus from "facilitating connection" to "supporting avoidance".
1
Player Discretion is Advised: Designing for Rule-Changing Play
Doruk Balcı (University of York, York, United Kingdom), Ioanna Iacovides (University of York, York, United Kingdom), Ben Kirman (University of York, York, United Kingdom)
This paper uses research through game design to explore how we can make video games that invite players to invent their own personal play-practices through making and changing rules. Through a reflective process of designing and playtesting a multiplayer game in which changing rules and parameters is the central mechanic, we have identified how we can create opportunities for players to exert their own creative authority on the structure of their play-practices. As our contribution, we present three design themes which aim to invite player authorship on practices of gameplay: opening up digital rules and parameters, bringing internal rules to the surface, and leaving space for internal goals. We also bring a larger discussion of these design patterns in which we investigate the duality of responsibility and freedom in play when we design for player creativity, and the role of video games as tools to make metagames.
1
The Algorithmic Mirror: Knowledge Creation and Self-Perception in Dating Applications
Nadav Viduchinsky (Bar-Ilan University, Ramat-Gan, Israel)
Algorithmic dating applications mediate romance through an "algorithmic mirror," subjecting users to data-driven classifications that shape their self-perception. However, the specific strategies users employ to interpret and strategically manage this reflection remain underexplored. Understanding this dynamic is critical, as navigating the algorithmic gaze demands significant emotional labor and has profound implications for user agency and well-being. Through semi-structured interviews with 15 OkCupid users, I investigated this process of sense-making. I contribute a novel typology of three knowledge forms, Folk, Personal, and Academic, that users construct to redefine themselves against the algorithm. Theoretically, this paper frames the "algorithmic other" as a statistical counterpart to Mead's "generalized other," revealing a core "dual-audience dilemma" where users perform for both humans and machines. These findings inform the design of more transparent and contestable systems that better support user agency.
1
Certified But Imperfect: Investigating The Role of AI Certifications And System Performance on Trust in And Reliance on AI Systems
Magdalena Wischnewski (Research Center for Trustworthy Data Science and Security, Dortmund, Germany), Alisa Scharmann (University of Duisburg-Essen, Duisburg, Germany), Annika Ridder (University of Duisburg-Essen, Duisburg, Germany), Nicole Krämer (Social Psychology - Media and Communication, Universität Duisburg-Essen, Duisburg, Germany)
While regulatory frameworks call for the implementation of AI certifications, empirical knowledge about how such certifications affect interactions is still scarce. In this work, we examined how AI certifications affect users' trust and reliance. In addition, we examined whether certifications elevate user expectations and whether unmet expectations subsequently reduce trust. In a 2 (certification vs no certification) x 2 (reliability: high vs low) between-subjects online study, N = 644 participants had to identify bacterial infestation in pictures with the help of an AI. Our results show that, before interacting with the AI, participants trusted the certified system more and showed reduced vigilance. However, these effects disappeared post-interaction, where, instead of the certification, system reliability significantly affected trust and vigilance. Notably, certifications did not raise expectations per se, but instead amplified the impact of system reliability on user trust. Additional exploratory results showed that the certification supported appropriate reliance.
1
Treading the Transparency Tightrope: A Taxonomy of Risks and Benefits of Foundation Model Data Transparency for Transparency Advocates
Morgan Klaus Scheuerman (Sony AI, Broomfield, Colorado, United States), Wiebke Hutiri (Sony AI, Zurich, Switzerland), Aida Rahmattalabi (Sony AI, Los Angeles, California, United States), Victoria Matthews (Sony AI, New York, New York, United States), Alice Xiang (Sony AI, Seattle, Washington, United States), Jerone Andrews (Sony AI, London, United Kingdom)
Data powering AI is often opaque. Researchers, NGOs, and law and policy leaders have called for greater transparency about how data is used for training, fine-tuning, and evaluation. While data transparency is often championed as crucial, what it concretely enables is largely implicit. Similarly, the concerns developers seem to have about transparency go unstated. This lack of clarity has led some researchers to critique transparency demands as disconnected from the actual benefits—or risks—to specific stakeholders. We analyze documentation from four stakeholder groups to create a taxonomy of the risks and benefits of dataset transparency. Data transparency is perceived as either a risk or a benefit given a stakeholder's position, rather than wholesale. We also propose data availability and data documentation as two lenses through which to consider transparency. We discuss how best to strategically promote situational data transparency that takes into account the relationship between stakeholder position, transparency modality, and benefits/risks.
1
HapPalm: Providing Rich Spatio-Temporal Vibrotactile Feedback on the Palm for Laptop Gaming
Yohan Yun (School of Computing, KAIST, Daejeon, Korea, Republic of)JaeHyun Kim (KAIST, Daejeon, Korea, Republic of)Geehyuk Lee (School of Computing, KAIST, Daejeon, Korea, Republic of)
While many modern gaming environments provide haptic feedback, laptop keyboard gaming remains largely without rich tactile interaction, despite a rapidly growing audience. In this paper, we propose the HapPalm interface, a novel laptop interface concept that delivers rich spatio-temporal vibrotactile feedback through the palmrest area, allowing players to feel game events with their palms. Our prototype uses dual 4×6 linear resonant actuator arrays. To render various game events with the HapPalm interface, our first study aimed to create a haptic pattern dataset: iterative design workshops identified 11 haptic pattern templates, which our second study validated as convincingly conveying diverse game events. Our final study embedded these patterns into a custom game, showing that spatial haptics significantly improved fun, immersion, realism, and presence compared to non-spatial or no-haptic conditions. The HapPalm interface demonstrates that palmrest-based haptics can enrich keyboard-only laptop gaming, providing an expressive and immersive tactile channel for future laptop interfaces.
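To make "spatio-temporal vibrotactile feedback" concrete, here is a minimal sketch of how a pattern for a 4×6 actuator grid might be represented and played back frame by frame. The frame format, timing, and drive_actuators hook are illustrative assumptions, not the paper's actual implementation:

    import time

    ROWS, COLS = 4, 6      # grid size stated in the abstract
    FRAME_S = 0.02         # per-frame duration; an assumed value

    def sweep_left_to_right():
        # One frame per column: a vertical bar of vibration moving across the palm.
        for col in range(COLS):
            yield [[1.0 if c == col else 0.0 for c in range(COLS)]
                   for _ in range(ROWS)]

    def play(pattern, drive_actuators):
        # drive_actuators is a hypothetical callback that sends one 4x6
        # amplitude frame (values 0.0-1.0) to the LRA array.
        for frame in pattern:
            drive_actuators(frame)
            time.sleep(FRAME_S)

    # Example: print frames instead of driving real hardware.
    play(sweep_left_to_right(), lambda frame: print(frame))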
1
Mind the SIM: Awareness and Mental Models in a South Korean Case Study
Hyunsoo Lee (KAIST, Daejeon, Korea, Republic of)Seyoung Jin (Sungkyunkwan University, Suwon, Korea, Republic of)Hyoungshick Kim (Sungkyunkwan University, Seoul, Korea, Republic of)Uichin Lee (KAIST, Daejeon, Korea, Republic of)
Mobile phone numbers function as single keys to banking, government, and commerce, making the Subscriber Identity Module (SIM) a critical element of security. In April 2025, South Korea’s largest carrier experienced a SIM breach that compromised authentication keys and exposed nearly 27 million subscriber identifiers. We conducted semi-structured interviews with mental-model elicitation (N=33) to examine user awareness, responses, and understanding of SIM-based authentication. Results reveal a pronounced awareness–action gap: participants recognized the breach yet held incomplete mental models, perceived little personal risk, and rarely acted protectively, even when affected. Learned helplessness, reliance on carriers, and the invisibility of the SIM shaped these passive responses. Brief educational interventions improved conceptual understanding but seldom produced lasting behavioral change. Our findings demonstrate how technical opacity and psychological factors jointly inhibit protective action and offer design implications for usable security, emphasizing interventions that realign users’ mental models with system risks to foster sustainable practices.
1
The Impacts of Transparency and Personalization on Feelings of Agency and Connection in Democratic Decision Making
Margaret Hughes (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Cassandra Overney (Massachusetts Institute of Technology, Cambridge, Massachusetts, United States)Mahmood Jasim (Louisiana State University, Baton Rouge, Louisiana, United States)Deb Roy (MIT, Cambridge, Massachusetts, United States)
Community engagement processes often shape policies that affect people’s daily lives, yet they frequently struggle to build transparency, understanding, and agency. Civic technologies aim to address this gap by making connections between voices and decisions visible, but their impact on democratic participants is rarely evaluated. This study examines the effects of varying levels and types of transparency, including personalization, in technology-enabled civic decision-making on perceptions of agency, vertical and horizontal transparency, and community connection. We conducted an experiment with 266 participants who advocated for a local skate park or tennis court, and then received a decision for or against their position under varying transparency conditions. Results show that increased transparency improved perceptions of agency, vertical transparency, and horizontal transparency, but personalization had limited effects. Qualitative reflections highlighted horizontal transparency as particularly valuable for opening perspectives and enhancing participant experience. We discuss key design implications for civic technologies.
1
From Options to Action: Evaluating Adoption of Privacy Features in Fitness-Tracking Platforms
Pantelina Ioannou (University of Cyprus, Nicosia, Cyprus)Angeliki Aktypi (University of Cyprus, Nicosia, Cyprus)Elias Athanasopoulos (University of Cyprus, Nicosia, Cyprus)
Fitness-tracking platforms, such as Strava and Garmin Connect, are increasingly popular and are reshaping how people monitor and share their physical activity. Given the sensitive nature of the data users share, these platforms implement a series of privacy features, including controls for profile visibility, activity sharing, and the specification of sensitive locations. In this paper, we present the first large-scale study aiming to quantify user adoption of privacy features on fitness-tracking platforms and to shed light on the reasoning behind identified trends. We apply a mixed-methods approach. First, we provide a systematic categorization of the privacy features implemented across major fitness-tracking platforms. We then quantify their adoption, using the Strava and Garmin Connect platforms as our case studies, by analyzing 197,873 public activity records, revealing a gap between available controls and actual adoption. We complement our empirical evaluation by surveying 182 participants, confirming low adoption and identifying barriers. Our findings highlight limited use of privacy features and provide insights into the reasons for this trend, including a lack of awareness, perceived low necessity, concerns about functionality, and difficulties adjusting settings. We also discuss potential strategies to overcome these challenges.
1
Privacy and Trust vs. Utility: Adoption of Commercial vs. Institutional AI Assistants Among University Users
Yuting Yang (University of Michigan, Ann Arbor, Michigan, United States)Zixin Wang (University of Michigan, Ann Arbor, Ann Arbor, Michigan, United States)Rongjun Ma (Aalto University , Espoo, Finland)Florian Schaub (University of Michigan, Ann Arbor, Michigan, United States)
Generative AI assistants are being rapidly adopted in universities, supporting students in coursework and faculty in academic tasks. To address privacy concerns, some institutions introduced institutional AI assistants, typically wrappers around commercial models (e.g., ChatGPT) with added governance and data protections. However, university-affiliated users appear to rely more on commercial tools (e.g., ChatGPT, Gemini). We conducted a survey (n=260) at one U.S. university to examine preferences, usage scenarios, and perceptions of trust, privacy, and experience with institutional and commercial AI. Participants trusted institutional tools more and considered them more privacy-protective; nevertheless, commercial tools were often favored for writing, programming, and learning due to their features and utility. Findings reveal a trade-off between privacy and trust versus utility, highlighting complementary adoption patterns and design opportunities for both institutional and commercial AI in higher education.
1
I Can SE Clearly Now: Investigating the Effectiveness of GUI-based Symbolic Execution for Software Vulnerability Discovery
Yi Jou Li (Arizona State University, Tempe, Arizona, United States)Zeming Yu (Arizona State University, Tempe, Arizona, United States)James A. Mattei (Tufts University, Medford, Massachusetts, United States)Ananta Soneji (Arizona State University, Tempe, Arizona, United States)Zhibo Sun (Drexel University, Philadelphia, Pennsylvania, United States)Ruoyu “Fish” Wang (Arizona State University, Tempe, Arizona, United States)Jaron Mink (Arizona State University, Tempe, Arizona, United States)Daniel Votipka (Tufts University, Medford, Massachusetts, United States)Tiffany Bao (Arizona State University, Tempe, Arizona, United States)
While symbolic execution (SE) can discover software vulnerabilities, it has received limited practical adoption. A key barrier is that SE requires human expertise to understand the program’s state and prioritize paths to analyze. Traditionally, users controlled SE through programmatic API calls, but recent tooling now implements graphical user interfaces (GUIs). However, it is unclear how these new features affect human-SE performance. To understand this impact, we conducted a controlled experiment where 24 vulnerability discovery experts were tasked with analyzing a binary using an SE tool with either API- or GUI-based features. From this study, we identify (1) experts' SE process and (2) the impact of GUI-based features on human-SE performance. We then propose recommendations to improve SE tool design.
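For readers unfamiliar with the "programmatic API calls" the abstract contrasts with GUI control, the sketch below shows what driving an SE tool from code typically looks like, using the open-source angr framework as an example. The abstract does not name the tool used in the study, and the binary path and addresses here are hypothetical:

    import angr

    # Load the target binary (path is hypothetical).
    proj = angr.Project("./target_binary", auto_load_libs=False)

    # Begin symbolic execution from the program entry point.
    simgr = proj.factory.simulation_manager(proj.factory.entry_state())

    # The analyst's expertise enters here: choosing which addresses to
    # reach and which to prune (both addresses are made up for this sketch).
    simgr.explore(find=0x401234, avoid=0x401300)

    if simgr.found:
        # Concretize stdin to recover an input that reaches the target state.
        print(simgr.found[0].posix.dumps(0))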
1
Why Johnny Checks but Doesn’t Alert: Reporting as the Missing Step in Verifiable Internet Voting
Tobias Hilt (Karlsruhe Institute of Technology, Karlsruhe, Germany)Christian Mack (Karlsruhe Institute of Technology, Karlsruhe, Germany)Benjamin Maximilian Berens (Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany)Melanie Volkamer (SECUSO, Karlsruhe Institute of Technology, Karlsruhe, Germany)
End-to-end verifiable Internet voting promises that voters can remotely check whether their ballot was recorded correctly and that all ballots were tallied as cast. However, in order to achieve an adequate level of security, voters actually need to perform the first check. Our research focuses on the cast-then-audit approach for this check. We use related work to improve this approach, in particular by providing a step-by-step guide. We conducted a deceptive online user study (N=437) to compare our improved system with a baseline version from an actual election. We also measured usability and participants' confidence in using such systems. Our findings show that participants using the improved system performed significantly better than the baseline with respect to manipulation detection and reporting capabilities. Furthermore, we show that it is important to distinguish between detection and reporting to understand how to further increase the overall security.
1
"The AI tool can’t make it any worse." Investigating Developers’ Security Behavior with AI Assistants in a Password Storage Study
Asli Yardim (Ruhr University Bochum, Bochum, Germany)Raphael Serafini (University of Cologne, Cologne, Germany)Nadine Jost (Ruhr University Bochum, Bochum, Germany)Anna-Marie Ortloff (University of Bonn, Bonn, Germany)Joshua Gabriel Speckels (University of Cologne, Cologne, Germany)Alena Naiakshina (University of Cologne, Cologne, Germany)
Past research showed that software developers often require explicit instructions to implement security measures. With the rapid rise of AI assistant tools such as ChatGPT, it remains unclear whether AI assistance supports or undermines secure practices, whether explicit security instructions are still essential, and how developers behave without guidance. To investigate these research questions, we conducted a qualitative lab study with 21 computer science students and a quantitative online study with 80 freelance developers. We focused on secure password storage and asked participants to implement registration logic under four conditions: without instructions, with AI assistance, with security instructions, or with both AI assistance and security instructions. Our study reveals a clear behavioral shift: In our task, many participants relied on AI-assisted code generation for security-related tasks, often prioritizing convenience over security. However, explicit security-focused instructions can redirect this behavior toward secure outcomes, demonstrating that AI tools alone are insufficient without targeted guidance.
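As context for the registration-logic task, here is a minimal sketch of what a secure outcome looks like: hashing passwords with a slow, salted algorithm rather than storing them in plaintext. Python and the bcrypt library are illustrative choices; the study does not prescribe a language or library:

    import bcrypt

    def register(password: str) -> bytes:
        # Hash with a per-password random salt; never store the plaintext.
        return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt())

    def login(password: str, stored_hash: bytes) -> bool:
        # checkpw re-derives the hash and compares in constant time.
        return bcrypt.checkpw(password.encode("utf-8"), stored_hash)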
1
“It’s Just a Wild, Wild West”: Harnessing Public Procurement as an AI Governance Mechanism
Anna Ida Hudig (University of Cambridge, Cambridge, United Kingdom)Emma Marlene Kallina (UA Ruhr University Duisburg-Essen, Duisburg, Germany)Jatinder Singh (University Duisburg-Essen, Duisburg, Germany)
Public sector AI has the potential to harm citizens, with risks increasing as its use expands. Recent work positions public procurement as a way to shape public sector AI in line with public interests, using the state’s purchasing power to influence which AI systems are procured and under what conditions. This paper examines how this potential can be realised in practice by drawing on semi-structured interviews with UK and EU buyers, providers, and procurement experts. Our findings identify six promising procurement practices that enable the public sector to shape AI in line with public interests, alongside concrete mechanisms to support their uptake. Further, we find that AI-specific procurement approaches remain immature and that systems often enter through informal channels with less scrutiny. We provide directions for both research and practice on how public procurement can be used as a governance mechanism for better aligning AI with public interests.
1
Don't Worry, Just Follow Me: Prototyping and In-the-Wild Evaluation of Smart Pole Interaction Unit with Mobility
Vishal Chauhan (The University of Tokyo, Bunkyo, Tokyo, Japan)Anubhav Anubhav (The University of Tokyo, Tokyo, Japan)Mark Colley (UCL Interaction Centre, London, United Kingdom)Chia-Ming Chang (National Taiwan University of Arts, Taipei, Taiwan)Xinyue Gui (The University of Tokyo, Tokyo, Japan)Ding Xia (The University of Tokyo, Tokyo, Japan)Ehsan Javanmardi (The University of Tokyo, Tokyo, Japan)Takeo Igarashi (The University of Tokyo, Tokyo, Japan)Kantaro Fujiwara (University of Tokyo, Tokyo, Japan)Manabu Tsukada (The University of Tokyo, Tokyo, Japan)
Pedestrian–automated vehicle (AV) encounters in shared spaces often involve hesitation and ambiguity. Vehicle-mounted external human–machine interfaces (eHMIs) can help, but obscured or poorly timed communications create significant challenges. To address this, we present a mobile smart pole interaction unit (SPIU) with integrated cameras and LED displays, designed as a pedestrian-side system to deliver explicit cues ("WALK," "STOP"). An in-the-wild evaluation of the SPIU (N=21) using a four-factor analysis (CarBehavior, Mobility, eHMI, SPIU) showed that the SPIU improved understandability, trust, and perceived safety, and reduced workload compared with the baseline, with the combination (eHMI+SPIU) yielding the strongest results. Beyond these quantitative benefits, participants appreciated the mobility of the SPIU for its "clear" and "easy to decide" mediation. This work contributes (1) a design and deployment framework for a mobile SPIU and (2) an in-the-wild evaluation protocol for pedestrian–AV interactions in nonsignalized spaces. Our work sparks discussions on real-world evaluations involving detailed vehicle kinematics and accessible multimodality (e.g., audio), focusing on the role of personal robots as user-side eHMIs.
1
Passing Down Passwords: How Older Adults Approach Postmortem Account Access and Digital Estate Planning
Jenny Tang (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Xiaoyuan Wu (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lujo Bauer (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Nicolas Christin (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)Lorrie Faith Cranor (Carnegie Mellon University, Pittsburgh, Pennsylvania, United States)
Traditional estate planning practices enable people to provide their heirs access to the assets left behind but are often insufficient for the transfer and management of online accounts. To understand how estate planning practices could be improved, we conducted 21 semi-structured interviews with older adults in the United States that explored their practices, concerns, and needs regarding postmortem online account access and management. We encountered few formalized digital estate planning practices; many participants use their credential management practices—primarily pen-and-paper—to provide postmortem account access. How participants envision account transfer is motivated by trust in their current practices and in their heirs, while concerns regarding technology hinder adoption of new methods. Participants consistently prioritize accounts with financial assets, and expectations surrounding postmortem account management vary based on individual circumstances, with the common goal of reducing burdens on executors and heirs. Our results suggest the need for developing technical standardization and expert guidance for digital estate planning.
1
Exploring Women’s Perspectives on Learning and Trust in Automated Vehicles: A Socio-Ecological Lens
ALAA H A. ABUSAFIA (Queensland University of Technology, Brisbane, Australia)Ronald Schroeter (Queensland University of Technology (QUT), Brisbane, Australia)Alessandro Soro (Queensland University of Technology, Brisbane, Australia)
As automated vehicles (AVs) move toward mainstream adoption, understanding how users learn about and build trust in them is critical. Prior research shows that women hold safety concerns and report low trust and familiarity with AVs. While limited exposure is often cited as a cause, growing evidence indicates that women’s needs, preferences, and safety priorities remain insufficiently addressed in AV design and governance. We conducted ten dyadic and five individual semi-structured interviews with fifteen women, guided by feminist HCI principles. We then analysed findings through a socio-ecological framework to explore trust and learning. Our findings show that women's needs and expectations for AVs develop in conversation with gendered and caregiving responsibilities, and experiences of safety and vulnerability. Trust and learning co-evolve in this process as a dynamic association of forces influencing inclusive mobility. We contribute a feminist socio-ecological account of trust–learning dynamics, identifying design and policy interventions that support inclusive onboarding, institutional accountability, and community-based co-learning for equitable AV adoption.