Keywords

AI-assisted coding; security verification; interface cues; accountability; security-compliance labels; heuristic–systematic model; secure software development

Abstract

AI coding assistants can improve developer productivity, yet AI-generated code may contain vulnerabilities, creating a tradeoff between speed and security. Drawing on the Heuristic–Systematic Model (HSM), this paper develops a research model of how AI interface cues help organizations balance productivity with the minimum verification needed to avoid insecure code adoption. First, we theorize that security-compliance labels (e.g., “OWASP-compliant”) can be beneficial by enabling quick judgments that support productivity; however, when over-relied upon, they function as heuristic endorsements that reduce security verification. Second, we theorize that accountability cues can restore balance by shifting developers toward systematic processing and attenuating the verification-reducing effect of security-compliance labels. We propose a controlled experiment using an AI coding assistant with a 2×2 manipulation of security-compliance and accountability cues. The study advances research on AI interface cueing, security behavior, and HSM in AI-assisted coding.

Dec 15th, 12:00 AM

Security-Compliance and Accountability: How AI Coding Interface Cues Shape Developers’ Security Verification

