Privacy and Security in Software Development

Conference name
CHI 2026
Untangling the Timeline: Challenges and Opportunities in Supporting Version Control in Modern Computer-Aided Design
Abstract

Version control is critical in mechanical computer-aided design (CAD) to enable traceability, manage product variation, and support collaboration. Yet, its implementation in modern CAD software as an essential information infrastructure for product development remains plagued by issues due to the complexity and interdependence of design data. This paper presents a systematic review of user-reported challenges with version control in modern CAD tools. Analyzing 170 online forum threads, we identify recurring socio-technical issues that span the management, continuity, scope, and distribution of versions. Our findings inform a broader reflection on how version control should be designed and improved for CAD and motivate opportunities for tools and mechanisms that better support articulation work, facilitate cross-boundary collaboration, and operate with infrastructural reflexivity. This study offers actionable insights for CAD software providers and highlights opportunities for researchers to rethink version control.

Authors
Yuanzhe Deng
University of Toronto, Toronto, Ontario, Canada
Shutong Zhang
University of Toronto, Toronto, Ontario, Canada
Kathy Cheng
University of Toronto, Toronto, Ontario, Canada
Alison Olechowski
University of Toronto, Toronto, Ontario, Canada
Shurui Zhou
University of Toronto, Toronto, Ontario, Canada
Tool-Assisted CVSS Vulnerability Scoring: A Controlled Quantitative Study of Human Assessment
Abstract

Quantitative vulnerability assessment is central to security management, guiding how risks are prioritized and mitigated. Yet, severity scoring relies on human judgment and is therefore subject to differences in experience, interpretation, and diligence; prior work has even shown expert disagreement. We examine an NLP-based assistive tool that visualizes keyword cues during assessment. In a controlled survey of 389 participants recruited via Amazon MTurk and Prolific, we statistically analyze how participant skills/demographics, vulnerability characteristics, and tool support affect outcomes. Results show the tool does not consistently improve assessment accuracy across expertise levels, but can help for specific vulnerability types (e.g., CWE-787) and CVSS metrics (AC, PR, Scope), and can increase user confidence. Beyond immediate performance, the tool can support training for manual assessment tasks that are hard to automate, as learning effects yield significant improvements on subsequent tasks. This work informs the design of cybersecurity decision-support tools and motivates future research on security training and human-centered security.

Authors
Siqi Zhang
Vrije Universiteit Amsterdam, Amsterdam, Netherlands
Minjie Cai
Carleton University, Ottawa, Ontario, Canada
Lianying Zhao
Carleton University, Ottawa, Ontario, Canada
Xavier de Carné de Carnavalet
Radboud University, Nijmegen, Netherlands
Fabio Massacci
Vrije Universiteit Amsterdam, Amsterdam, NH, Netherlands
Mengyuan Zhang
Vrije Universiteit Amsterdam, Amsterdam, Netherlands
Video
Robust Methods for Developer Screening in Rapidly Evolving AI Contexts
Abstract

The rise of AI-powered tools like ChatGPT enables non-programmers to bypass programming screening questions, undermining internal validity in usable security and privacy as well as software engineering studies. Previously proposed ChatGPT-resistant tasks relied on static visual questions, which ChatGPT can now circumvent. Therefore, we tested alternative approaches such as video- and audio-based screeners that reveal key information step by step under strict time constraints to distinguish programmers from non-programmers. To this end, we conducted a study with 74 participants across three groups: programmers, non-programmers without AI assistance, and non-programmers using ChatGPT. Our results showed that audio-based screeners were robust against ChatGPT-based cheating, as non-programmers struggled to find correct answers within the time limits, whereas programmers demonstrated high accuracy with minimal time pressure. Based on our findings, we recommend six audio-based ChatGPT-resistant screening questions that maximize screening effectiveness and efficiency, and suggest a 215-second instrument that includes 95.87% of programmers while excluding 99.69% of non-programmers.

Authors
Raphael Serafini
University of Cologne, Cologne, Germany
Nino Weber
Ruhr University Bochum, Bochum, Germany
Asli Yardim
Ruhr University Bochum, Bochum, Germany
Stefan Albert Horstmann
Ruhr University Bochum, Bochum, Germany
Alena Naiakshina
University of Cologne, Cologne, Germany
"The AI tool can’t make it any worse." Investigating Developers’ Security Behavior with AI Assistants in a Password Storage Study
Abstract

Past research showed that software developers often require explicit instructions to implement security measures. With the rapid rise of AI assistant tools such as ChatGPT, it remains unclear whether AI assistance supports or undermines secure practices, whether explicit security instructions are still essential, and how developers behave without guidance. To investigate these research questions, we conducted a qualitative lab study with 21 computer science students and a quantitative online study with 80 freelance developers. We focused on secure password storage and asked participants to implement registration logic under four conditions: without instructions, with AI assistance, with security instructions, or with both AI assistance and security instructions. Our study reveals a clear behavioral shift: In our task, many participants relied on AI-assisted code generation for security-related tasks, often prioritizing convenience over security. However, explicit security-focused instructions can redirect this behavior toward secure outcomes, demonstrating that AI tools alone are insufficient without targeted guidance.

Authors
Asli Yardim
Ruhr University Bochum, Bochum, Germany
Raphael Serafini
University of Cologne, Cologne, Germany
Nadine Jost
Ruhr University Bochum, Bochum, Germany
Anna-Marie Ortloff
University of Bonn, Bonn, Germany
Joshua Gabriel Speckels
University of Cologne, Cologne, Germany
Alena Naiakshina
University of Cologne, Cologne, Germany
"It's Confusing, Insecure, and Messy" – Mapping the Gaps Between Stakeholders' Cybersecurity Mental Models in the Danish Defence Sector
Abstract

Small and medium-sized enterprises (SMEs) are facing growing cybersecurity threats amidst limited resources and regulatory complexity. This complexity stems from diverse stakeholders in the regulatory process, including policymakers, industry associations, and companies that must implement the regulations. Misalignments between these different stakeholders can further compound the complexity. Against this backdrop, we investigate the cybersecurity mental models held by three stakeholder groups in Denmark’s defence sector and how these mental models might influence regulatory processes. Using a qualitative approach combining focus groups with 6 policymakers, 11 policy promoters (industry associations), and 12 policy implementers (SMEs), we reveal key misalignments in perceptions of risk, threats, cyber readiness, and policy interpretation. Our findings further show that SMEs often treat cybersecurity as a compliance task, while policymakers assume strategic readiness. Based on our results, we suggest recommendations for aligning governance frameworks with organisational realities.

Authors
Judith Kankam-Boateng
University of Southern Denmark, Odense, Denmark
Marco Peressotti
University of Southern Denmark, Odense, Denmark
Jan Stentoft
University of Southern Denmark, Kolding, Denmark
Kent Wickstrøm Jensen
University of Southern Denmark, Kolding, Denmark
Vincent Charles Keating
University of Southern Denmark, Odense, Denmark
Louise Alison Tumchewics
University of Southern Denmark, Odense, Denmark
Olivier Schmitt
Royal Danish Academy, Copenhagen, Denmark
Amelie Theussen
Royal Danish Academy, Copenhagen, Denmark
Peter Mayer
University of Southern Denmark, Odense, Denmark
Video
"Having Confidence in My Confidence Intervals": How Data Users Engage with Privacy-Protected Wikipedia Data
Abstract

In response to calls for open data and growing privacy threats, organizations are increasingly adopting privacy-preserving techniques that add noise to published datasets. These techniques seek to protect the privacy of data subjects while enabling useful analyses. With expert feedback, we developed empirically-driven documentation explaining the noise characteristics of two Wikipedia pageview datasets: one using rounding (heuristic privacy) and another using differential privacy (DP, formal privacy). We then used these documents to conduct a task-based contextual inquiry (n=15) exploring how data users—largely unfamiliar with these methods—perceive, interact with, and interpret privacy-preserving noise during data analysis. Participants readily used simple uncertainty metrics from the documentation, but struggled when computing confidence intervals across multiple noisy estimates. They devised simulation-based approaches for computing uncertainty more readily with DP-noised data than with rounded data. Surprisingly, several participants incorrectly believed DP's stronger utility implied weaker privacy protections. We offer design recommendations for documentation and tools to better support data users working with privacy-noised data.

Authors
Harold Triedman
Cornell Tech, New York, New York, United States
Jayshree Sarathy
Northeastern University, Boston, Massachusetts, United States
Priyanka Nanayakkara
Harvard University, Cambridge, Massachusetts, United States
Rachel Cummings
Columbia University, New York, New York, United States
Gabriel Kaptchuk
University of Maryland, College Park, Maryland, United States
Sean Kross
Fred Hutch Cancer Center, Seattle, Washington, United States
Elissa Redmiles
Georgetown University, Washington, District of Columbia, United States
I Can SE Clearly Now: Investigating the Effectiveness of GUI-based Symbolic Execution for Software Vulnerability Discovery
Abstract

While symbolic execution (SE) can discover software vulnerabilities, it has seen limited practical adoption. A key barrier is that SE requires human expertise to understand the program’s state and prioritize which paths to analyze. Traditionally, users controlled SE through programmatic API calls, but recent tooling implements graphical user interfaces (GUIs). However, it is unclear how these new features affect human-SE performance. To understand this impact, we conducted a controlled experiment in which 24 vulnerability discovery experts were tasked with analyzing a binary using an SE tool with either API- or GUI-based features. From this study, we identify (1) experts' SE process and (2) the impact of GUI-based features on human-SE performance. We then propose recommendations to improve SE tool design.

Authors
Yi Jou Li
Arizona State University, Tempe, Arizona, United States
Zeming Yu
Arizona State University, Tempe, Arizona, United States
James A. Mattei
Tufts University, Medford, Massachusetts, United States
Ananta Soneji
Arizona State University, Tempe, Arizona, United States
Zhibo Sun
Drexel University, Philadelphia, Pennsylvania, United States
Ruoyu “Fish” Wang
Arizona State University, Tempe, Arizona, United States
Jaron Mink
Arizona State University, Tempe, Arizona, United States
Daniel Votipka
Tufts University, Medford, Massachusetts, United States
Tiffany Bao
Arizona State University, Tempe, Arizona, United States