Exploring the Impact of Intervention Methods on Developers’ Security Behavior in a Manipulated ChatGPT Study

Abstract

Increased AI use in software development raises concerns about the security of AI-generated code. We investigated the impact of security prompts, warnings about insecure AI suggestions, and the use of password storage guidelines (OWASP, NIST) on the security behavior of software developers when presented with insecure AI assistance. In an online lab setting, we conducted a study with 76 freelance developers who completed a password storage task under one of four conditions. Three conditions included a manipulated ChatGPT-like AI assistant suggesting an insecure MD5 implementation. We found a high level of trust in AI-generated code, even when insecure suggestions were presented. While security prompts, AI warnings, and guidelines improved security awareness, 32% of those notified about insecure AI recommendations still accepted weak implementation suggestions, mistakenly considering them secure and often expressing confidence in their choice. Based on our results, we discuss security implications and provide recommendations for future research.
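The paper itself contains no code, but the contrast it studies is concrete: a fast, unsalted MD5 digest versus the slow, salted key-derivation functions that the OWASP and NIST password storage guidelines recommend. Below is a minimal Python sketch of that contrast; the function names are illustrative, not from the paper, and the iteration count follows current OWASP guidance for PBKDF2-HMAC-SHA256.

```python
import hashlib
import hmac
import os

# Insecure pattern of the kind the manipulated assistant suggested:
# a single, unsalted MD5 digest. MD5 is fast to compute, so offline
# guessing attacks against leaked hashes are cheap.
def hash_password_insecure(password: str) -> str:
    return hashlib.md5(password.encode("utf-8")).hexdigest()

# Safer baseline in line with OWASP/NIST password storage guidance:
# a per-user random salt and a deliberately slow key-derivation
# function from the standard library (PBKDF2-HMAC-SHA256).
def hash_password_pbkdf2(password: str,
                         iterations: int = 600_000) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # 128-bit random salt, stored alongside the hash
    digest = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, iterations
    )
    return salt, digest

def verify_password_pbkdf2(password: str, salt: bytes, digest: bytes,
                           iterations: int = 600_000) -> bool:
    candidate = hashlib.pbkdf2_hmac(
        "sha256", password.encode("utf-8"), salt, iterations
    )
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)
```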

Award
Honorable Mention
Authors
Raphael Serafini
Ruhr University Bochum, Bochum, Germany
Asli Yardim
Ruhr University Bochum, Bochum, Germany
Alena Naiakshina
Ruhr University Bochum, Bochum, Germany
DOI

10.1145/3706598.3713989

Paper URL

https://dl.acm.org/doi/10.1145/3706598.3713989


Conference: CHI 2025

The ACM CHI Conference on Human Factors in Computing Systems (https://chi2025.acm.org/)

Session: High-Stake Situations

G302
7 presentations
2025-04-28 23:10:00 to 2025-04-29 00:40:00