In this paper, we study the experiences of practitioners in sectoral Computer Security Incident Response Teams (CSIRTs), specialized teams that mediate between national cybersecurity authorities and the constituents of a specific sector. Through interviews with 18 professionals connected to the Informatiebeveiligingsdienst (IBD-CSIRT) for Dutch local governments, we uncover tensions in how key services are valued. Vulnerability notifications exemplify this: while CSIRT staff consider them a core service, many constituents hardly mention them, and systemic gaps in information forwarding mean that crucial alerts often never arrive. We extend these insights with 5 interviews across other sector CSIRTs and a validation workshop with 7 participants, all security officers from sector CSIRTs, revealing shared challenges in balancing technical expertise with sector knowledge, building trust-based relationships, and navigating institutional bottlenecks. Our findings contribute the first systematic account of how sector CSIRT professionals understand and perform their role, highlighting the tensions in providing sector-wide support to professionals with differing security needs.
The workforce shortage in Security Operations Centers (SOCs) increases the need for effective training methods for aspiring cybersecurity analysts. Cyber ranges provide realistic environments for such training, yet many designs prioritize technical infrastructure while overlooking how trainees actually learn. Building on established instructional design principles, this study investigates how to improve learning in cyber range exercises. In collaboration with a commercial SOC, we enhanced an existing exercise for training Tier 1 analysts by integrating T1GER, a cyber range Learning Management System (LMS) that provides structured feedback, scaffolding, and competitive elements. We evaluated the approach in a randomized controlled trial with N=144 participants from cybersecurity courses at two European universities, who were randomly assigned to either the original LMS (control group) or the T1GER LMS (treatment group). Results showed that using T1GER led to significantly better learning experiences and shorter training times, while maintaining equivalent knowledge outcomes.
Cryptocurrency airdrops power the growth and governance of the cryptocurrency ecosystem, yet they attract airdrop hunters, who coordinate wallets, script interactions, and cash out quickly, distorting metrics and fairness. Prior detection strands (heuristics/clustering, lightly supervised community partitioning, and graph learning) face three fundamental problems: inconsistent definitions, weak explainability, and poor cross-context generalization. We distill expert knowledge into a computable, interpretable baseline: open/axial coding of expert narratives followed by two Delphi rounds to (1) formalize a consensus operational definition with six contrasts to regular users; (2) derive 15 measurable indicators spanning operations and fund flow, tempered by counter-evidence of human-like behavior; and (3) report thresholds as reference distributions (medians, quartiles). The baseline supplies shared semantics and computation for labeling and evaluation, yields inspectable why-flagged rationales for audit and governance, and offers context-aware guidance across chains, campaign designs, and market phases, thereby strengthening on-chain security while informing the design of socio-technical systems perceived as fair, trustworthy, and resistant to strategic misuse.
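To make the threshold-as-distribution idea concrete, the sketch below computes reference quartiles for a few hypothetical indicators and emits inspectable why-flagged rationales. The indicator names, values, and suspicion directions are illustrative assumptions, not the paper's actual 15 indicators.

```python
import numpy as np

# Minimal sketch of quartile-based flagging with rationales; all indicator
# names, values, and directions below are hypothetical.
wallets = ["w0", "w1", "w2", "w3"]
indicators = {
    # (values per wallet, True if higher values are more hunter-like)
    "scripted_tx_burst_rate": (np.array([0.1, 0.9, 0.8, 0.2]), True),
    "funding_fanout_degree":  (np.array([2, 140, 120, 3]), True),
    "humanlike_idle_hours":   (np.array([9.0, 0.3, 0.5, 8.0]), False),  # counter-evidence
}

# Reference distributions: medians and quartiles rather than hard cutoffs.
reference = {
    name: dict(zip(("q1", "median", "q3"), np.percentile(vals, [25, 50, 75])))
    for name, (vals, _) in indicators.items()
}

def why_flagged(i: int) -> list[str]:
    """Collect inspectable rationales: indicators beyond the hunter-side quartile."""
    reasons = []
    for name, (vals, higher_is_suspicious) in indicators.items():
        v, ref = vals[i], reference[name]
        if (higher_is_suspicious and v > ref["q3"]) or (
            not higher_is_suspicious and v < ref["q1"]
        ):
            reasons.append(
                f"{name}={v:g} vs median {ref['median']:g} "
                f"(IQR {ref['q1']:g}-{ref['q3']:g})"
            )
    return reasons

for i, w in enumerate(wallets):
    print(w, why_flagged(i))
```

Reporting quartiles instead of a single cutoff lets downstream users adapt the flagging boundary to their own chain, campaign design, or market phase, in line with the context-aware guidance the baseline aims to provide.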
Under the Digital Services Act (DSA), online platforms must ensure that their algorithms comply with specific obligations concerning algorithmic transparency, user protection, and privacy. To verify compliance with these requirements, the DSA mandates that platforms undergo independent audits, yet little is known about current auditing practices and their effectiveness in ensuring such compliance. To this end, we bridge regulatory and technical perspectives by critically examining selected audit reports across three critical algorithm-related provisions: restrictions on profiling minors, transparency in recommender systems, and limitations on targeted advertising using sensitive data. Our analysis shows significant inconsistencies in methodologies and a lack of technical depth when evaluating AI-powered systems. To enhance the depth, scale, and independence of compliance assessments, we propose to employ algorithmic auditing: a process of behavioural assessment of AI algorithms by means of simulating user behaviour, observing algorithm responses, and analysing them for audited phenomena.
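As a rough illustration of such behavioural auditing, the sketch below runs a sock-puppet loop: simulated personas query a platform, served content is logged, and response distributions are compared across personas. The client function, persona attributes, and ad categories are hypothetical placeholders for an instrumented session against a real platform, not an actual platform API.

```python
import random
from collections import Counter

def fetch_ads_for(persona: dict, n: int = 50) -> list[str]:
    """Hypothetical stand-in for driving the platform as `persona`
    (e.g., via an instrumented browser) and logging the ads served."""
    pool = ["generic", "generic", "health", "finance"]
    # Simulated platform behaviour: skew toward a sensitive ad category
    # when the persona has disclosed a sensitive trait.
    bias = ["health"] * 3 if persona.get("sensitive_signal") else []
    return random.choices(pool + bias, k=n)

personas = [
    {"id": "control", "sensitive_signal": False},
    {"id": "treatment", "sensitive_signal": True},  # discloses a sensitive trait
]

# Observe algorithm responses per persona, then compare distributions:
# a skew toward sensitive ad categories for the treatment persona is the
# audited phenomenon (targeting based on sensitive data).
for p in personas:
    served = Counter(fetch_ads_for(p))
    share = served["health"] / sum(served.values())
    print(p["id"], f"health-ad share: {share:.0%}")
```

A real audit would replace the stand-in client with controlled accounts or crawlers, repeat runs to average out noise, and apply a statistical test to the observed distributions before concluding non-compliance.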
Fairness monitoring is critical for detecting algorithmic bias, as mandated by the EU AI Act. Since such monitoring requires sensitive user data (e.g., ethnicity), the AI Act permits its processing only with strict privacy measures, such as multi-party computation (MPC), in compliance with the GDPR. However, the effectiveness of such secure monitoring protocols ultimately depends on people's willingness to share their data, and little is known about how different MPC protocol designs shape user acceptance. To address this, we conducted an online survey with 833 participants in Europe, examining user acceptance of various MPC protocol designs for fairness monitoring. Findings suggest that users prioritized risk-related attributes (e.g., the privacy protection mechanism) in direct evaluations but benefit-related attributes (e.g., the fairness objective) in simulated choices, with acceptance shaped by their fairness and privacy orientations. We derive implications for deploying and communicating privacy-preserving protocols in ways that foster informed consent and align with user expectations.
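For readers unfamiliar with how MPC can support such monitoring, the sketch below shows one common design, additive secret sharing: each user splits an indicator bit (group membership combined with outcome) into shares, so the computing parties jointly learn only the aggregate fairness statistic and never an individual's sensitive attribute. The three-party setup and field modulus are illustrative assumptions, not the specific protocol designs evaluated in the survey.

```python
import secrets

# Minimal sketch of additive secret sharing over a prime field.
P = 2**61 - 1  # prime modulus (illustrative choice)

def share(value: int, n_parties: int = 3) -> list[int]:
    """Split `value` into n additive shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Each user secret-shares an indicator bit: 1 if the user is in the
# monitored group AND received a positive outcome, else 0.
user_indicators = [1, 0, 1, 1, 0]

# Each party locally accumulates the shares it receives; no single
# party ever sees an individual's bit in the clear.
party_totals = [0, 0, 0]
for bit in user_indicators:
    for party, s in enumerate(share(bit)):
        party_totals[party] = (party_totals[party] + s) % P

# Parties publish only their aggregate shares; recombining reveals the
# group-level count needed for the fairness metric, nothing more.
group_count = sum(party_totals) % P
print(group_count)  # -> 3
```

Which party runs the computation, what is revealed, and for what fairness objective are exactly the kinds of design attributes whose framing, per the survey findings, shapes whether users are willing to contribute their data.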
System administrators (sysadmins) hold considerable organizational power by controlling access, enforcing or bypassing security measures, and mediating between users and systems. Previous research on sysadmins has highlighted their technical practices, collaboration, update behavior, and misconfigurations, but rarely addressed questions of power, responsibility, and ethics within organizations. We surveyed 262 sysadmins from various countries and professional backgrounds on their perceptions of power, morality, oversight, and insider threats. Sysadmins defined their power mainly through specific skills and their perceived irreplaceability, and less in terms of destructive potential. Access rights played a role, but more important were how others, such as superiors, perceived them and their skills, as well as their actual decision-making authority. Demographic factors were largely irrelevant, though women were less confident than men when self-assessing their professional skills and power. Logging, dynamic privileges, and guidelines were not seen as restrictive. Sysadmins described facing ethical dilemmas and relying on their personal moral compass to work through them.