"The AI tool can’t make it any worse." Investigating Developers’ Security Behavior with AI Assistants in a Password Storage Study

Abstract

Past research showed that software developers often require explicit instructions to implement security measures. With the rapid rise of AI assistant tools such as ChatGPT, it remains unclear whether AI assistance supports or undermines secure practices, whether explicit security instructions are still essential, and how developers behave without guidance. To investigate these research questions, we conducted a qualitative lab study with 21 computer science students and a quantitative online study with 80 freelance developers. We focused on secure password storage and asked participants to implement registration logic under four conditions: without instructions, with AI assistance, with security instructions, or with both AI assistance and security instructions. Our study reveals a clear behavioral shift: In our task, many participants relied on AI-assisted code generation for security-related tasks, often prioritizing convenience over security. However, explicit security-focused instructions can redirect this behavior toward secure outcomes, demonstrating that AI tools alone are insufficient without targeted guidance.

Authors
Asli Yardim
Ruhr University Bochum, Bochum, Germany
Raphael Serafini
University of Cologne, Cologne, Germany
Nadine Jost
Ruhr University Bochum, Bochum, Germany
Anna-Marie Ortloff
University of Bonn, Bonn, Germany
Joshua Gabriel Speckels
University of Cologne, Cologne, Germany
Alena Naiakshina
University of Cologne, Cologne, Germany

Conference: CHI 2026

ACM CHI Conference on Human Factors in Computing Systems

Session: Privacy and Security in Software Development

Area 1 + 2 + 3: theatre
7 presentations
2026-04-16, 18:00–19:30