Version control is critical in mechanical computer-aided design (CAD) to enable traceability, manage product variation, and support collaboration. Yet its implementation in modern CAD software, an essential information infrastructure for product development, remains plagued by issues stemming from the complexity and interdependence of design data. This paper presents a systematic review of user-reported challenges with version control in modern CAD tools. Analyzing 170 online forum threads, we identify recurring socio-technical issues that span the management, continuity, scope, and distribution of versions. Our findings inform a broader reflection on how version control should be designed and improved for CAD, and they motivate opportunities for tools and mechanisms that better support articulation work, facilitate cross-boundary collaboration, and operate with infrastructural reflexivity. This study offers actionable insights for CAD software providers and highlights opportunities for researchers to rethink version control.
Quantitative vulnerability assessment is central to security management, guiding how risks are prioritized and mitigated. Yet severity scoring relies on human judgment and is therefore subject to differences in experience, interpretation, and diligence; prior work has shown that even experts disagree on scores. We examine an NLP-based assistive tool that visualizes keyword cues during assessment. In a controlled survey of 389 participants recruited via Amazon MTurk and Prolific, we statistically analyze how participant skills and demographics, vulnerability characteristics, and tool support affect outcomes. Results show that the tool does not consistently improve assessment accuracy across expertise levels, but it can help for specific vulnerability types (e.g., CWE-787) and CVSS metrics (Attack Complexity, Privileges Required, and Scope), and it can increase user confidence. Beyond immediate performance, the tool can support training for manual assessment tasks that are hard to automate, as learning effects yield significant improvements on subsequent tasks. This work informs the design of cybersecurity decision-support tools and motivates future research on security training and human-centered security.
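As context for the CVSS metrics named above, the sketch below works through a CVSS v3.1 base-score computation to show where Attack Complexity, Privileges Required, and Scope enter. The vulnerability and its metric values are hypothetical; the weights and formulas come from the CVSS v3.1 specification.

```python
import math

# Worked CVSS v3.1 base-score example for a hypothetical vulnerability.
# Vector: AV:N / AC:L / PR:L / UI:N / S:U / C:H / I:H / A:N
AV, AC, PR, UI = 0.85, 0.77, 0.62, 0.85   # Network / Low / Low / None
C, I, A = 0.56, 0.56, 0.0                 # High / High / None
scope_changed = False                     # the Scope metric
# (Note: PR's weight for "Low" would rise to 0.68 if Scope were Changed.)

exploitability = 8.22 * AV * AC * PR * UI
iss = 1 - (1 - C) * (1 - I) * (1 - A)     # impact sub-score base
if scope_changed:
    impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    score = min(1.08 * (impact + exploitability), 10)
else:
    impact = 6.42 * iss
    score = min(impact + exploitability, 10)

# CVSS rounds the score *up* to one decimal place.
base_score = math.ceil(score * 10) / 10 if impact > 0 else 0.0
print(base_score)  # 8.1 for this metric combination
```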
The rise of AI-powered tools like ChatGPT enables non-programmers to bypass programming screening questions, undermining internal validity in usable security and privacy as well as software engineering studies. Previously proposed ChatGPT-resistant tasks relied on static visual questions, which ChatGPT can now circumvent. We therefore tested alternative approaches, such as video- and audio-based screeners that reveal key information step by step under strict time constraints, to distinguish programmers from non-programmers. To this end, we conducted a study with 74 participants across three groups: programmers, non-programmers without AI assistance, and non-programmers using ChatGPT. Our results showed that audio-based screeners were robust against ChatGPT-based cheating: non-programmers struggled to find correct answers within the time limits, whereas programmers demonstrated high accuracy with minimal time pressure. Based on our findings, we recommend six audio-based ChatGPT-resistant screening questions that maximize screening effectiveness and efficiency, and we suggest a 215-second instrument that includes 95.87% of programmers while excluding 99.69% of non-programmers.
Past research has shown that software developers often require explicit instructions to implement security measures. With the rapid rise of AI assistant tools such as ChatGPT, it remains unclear whether AI assistance supports or undermines secure practices, whether explicit security instructions are still essential, and how developers behave without guidance. To investigate these questions, we conducted a qualitative lab study with 21 computer science students and a quantitative online study with 80 freelance developers. We focused on secure password storage and asked participants to implement registration logic under four conditions: without instructions, with AI assistance, with security instructions, or with both AI assistance and security instructions. Our study reveals a clear behavioral shift: in our task, many participants relied on AI-assisted code generation for security-related tasks, often prioritizing convenience over security. However, explicit security-focused instructions can redirect this behavior toward secure outcomes, demonstrating that AI tools alone are insufficient without targeted guidance.
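To make concrete what secure password storage entails in such a task, here is a minimal sketch of registration and login logic that stores only salted, adaptively hashed passwords rather than plaintext. It uses the bcrypt library as one common choice; the function names and the in-memory dict are illustrative, not the study's actual materials.

```python
import bcrypt

# Illustrative registration logic: never store the plaintext password.
# bcrypt generates a per-password salt and applies an adaptive work factor.
def register_user(db: dict, username: str, password: str) -> None:
    hashed = bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))
    db[username] = hashed  # store only the salted hash

def verify_login(db: dict, username: str, password: str) -> bool:
    stored = db.get(username)
    return stored is not None and bcrypt.checkpw(password.encode("utf-8"), stored)

# Usage, with a dict standing in for a real database:
users = {}
register_user(users, "alice", "correct horse battery staple")
assert verify_login(users, "alice", "correct horse battery staple")
assert not verify_login(users, "alice", "wrong password")
```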
Small and medium-sized enterprises (SMEs) face growing cybersecurity threats amidst limited resources and regulatory complexity. This complexity stems from the diverse stakeholders in the regulatory process, including policymakers, industry associations, and the companies that must implement the regulations. Misalignments between these stakeholders can further compound the complexity. Against this backdrop, we investigate the cybersecurity mental models held by three stakeholder groups in Denmark’s defence sector and how these mental models might influence regulatory processes. Using a qualitative approach combining focus groups with 6 policymakers, 11 policy promoters (industry associations), and 12 policy implementers (SMEs), we reveal key misalignments in perceptions of risk, threats, cyber readiness, and policy interpretation. Our findings further show that SMEs often treat cybersecurity as a compliance task, while policymakers assume strategic readiness. Based on our results, we offer recommendations for aligning governance frameworks with organisational realities.
In response to calls for open data and growing privacy threats, organizations are increasingly adopting privacy-preserving techniques that add noise to published datasets. These techniques seek to protect the privacy of data subjects while enabling useful analyses. With expert feedback, we developed empirically driven documentation explaining the noise characteristics of two Wikipedia pageview datasets: one using rounding (heuristic privacy) and another using differential privacy (DP, formal privacy). We then used these documents to conduct a task-based contextual inquiry (n=15) exploring how data users, largely unfamiliar with these methods, perceive, interact with, and interpret privacy-preserving noise during data analysis.
Participants readily used simple uncertainty metrics from the documentation, but they struggled when computing confidence intervals across multiple noisy estimates. They devised simulation-based approaches for computing uncertainty more readily with DP-noised data than with rounded data. Surprisingly, several participants incorrectly believed that DP's stronger utility implied weaker privacy protections. We offer design recommendations for documentation and tools to better support data users working with privacy-noised data.
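The simulation-based approach mentioned above can be sketched briefly: assuming the DP release perturbs each count with independent Laplace noise of a documented scale (as formal DP mechanisms typically allow), a data user can simulate that noise to bound the error of an aggregate. The counts and noise scale below are hypothetical.

```python
import numpy as np

# Hypothetical published DP pageview counts for three pages, each
# perturbed with independent Laplace noise of a known scale.
noisy_counts = np.array([1520, 980, 2310])  # hypothetical published values
laplace_scale = 10.0                        # assumed noise scale b = sensitivity / epsilon

# Simulate the combined noise across the three estimates many times to
# obtain an empirical error distribution for their sum.
rng = np.random.default_rng(seed=0)
simulated_error = rng.laplace(
    loc=0.0, scale=laplace_scale, size=(100_000, len(noisy_counts))
).sum(axis=1)

# 95% confidence interval for the true total: noisy total minus the
# upper/lower quantiles of the simulated error.
noisy_total = noisy_counts.sum()
lo = noisy_total - np.quantile(simulated_error, 0.975)
hi = noisy_total - np.quantile(simulated_error, 0.025)
print(f"noisy total = {noisy_total}, 95% CI for true total: ({lo:.0f}, {hi:.0f})")
```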
While symbolic execution (SE) can discover software vulnerabilities, it has received limited practical adoption.
A key barrier is that SE requires human expertise to understand the program’s state and prioritize paths to analyze.
Traditionally, users controlled SE through programmatic API calls, but recent tooling provides graphical user interfaces (GUIs). However, it is unclear how these new features affect human-SE performance.
To understand this impact, we conducted a controlled experiment in which 24 vulnerability discovery experts analyzed a binary using an SE tool with either API- or GUI-based features. From this study, we identify (1) experts' SE process and (2) the impact of GUI-based features on human-SE performance. We then propose recommendations to improve SE tool design.
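To illustrate the API-driven workflow that the study contrasts with GUI-based features, here is a minimal sketch using angr, one widely used SE framework for binaries; the abstract does not name the tool used in the study, and the binary path and addresses below are hypothetical.

```python
import angr

# Hypothetical target: steer symbolic execution toward a suspected
# vulnerable function while avoiding an error handler.
proj = angr.Project("./target_binary", auto_load_libs=False)
state = proj.factory.entry_state()
simgr = proj.factory.simulation_manager(state)

# Path prioritization is expressed programmatically: explore until some
# state reaches `find`, pruning states that hit `avoid`.
simgr.explore(find=0x401234, avoid=0x401300)  # hypothetical addresses

if simgr.found:
    found = simgr.found[0]
    # Concretize the stdin that drives execution to the target.
    print(found.posix.dumps(0))
```

Expressing state prioritization this way demands exactly the expertise the study examines: the analyst must already know which addresses matter and how to query the program's state, which is the burden GUI-based features aim to reduce.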