Paper Type
Complete
Abstract
This study addresses how cloud security assessment frameworks can evolve to counter risks introduced by Generative AI (GAI) in software development. Using Action Design Research, we identify gaps between the Cloud Security Alliance's Consensus Assessments Initiative Questionnaire (CAIQ) and NIST's Secure Software Development Framework (SSDF), focusing on AI-specific security considerations. We developed enhanced assessment questions addressing both traditional and AI-specific vulnerabilities, then evaluated their effectiveness by analyzing cybersecurity disclosures in 50 cloud vendors' 10-K filings using GPT-4o. Results reveal that while organizations implement traditional security measures, AI-specific practices—including model governance, adversarial testing, and incident response—remain underdeveloped. We contribute theoretical insights into evolving security frameworks and practical recommendations for strengthening cybersecurity in AI-driven cloud environments.
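For readers interested in the evaluation step, the following is a minimal sketch of how assessment questions might be applied to 10-K cybersecurity disclosures with GPT-4o via the OpenAI Python SDK. The question wording, prompt, and 0-2 scoring scale are illustrative assumptions, not the instrument used in the study.

# Minimal sketch (not the study's instrument): scoring a 10-K cybersecurity
# disclosure against AI-specific assessment questions with GPT-4o.
# Question text and the 0-2 scale are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical questions in the spirit of the enhanced CAIQ/SSDF items.
QUESTIONS = [
    "Does the filing describe governance controls for generative AI models?",
    "Does the filing mention adversarial or red-team testing of AI systems?",
    "Does the filing describe incident response procedures covering AI components?",
]

def score_disclosure(disclosure_text: str) -> str:
    """Ask GPT-4o to rate each question 0 (absent), 1 (partial), or 2 (explicit)."""
    prompt = (
        "Rate the following 10-K cybersecurity disclosure against each question "
        "on a 0-2 scale (0 = not addressed, 1 = partially, 2 = explicitly), "
        "with a one-sentence justification per question.\n\n"
        + "\n".join(f"- {q}" for q in QUESTIONS)
        + "\n\nDisclosure:\n" + disclosure_text
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep ratings reproducible across vendors
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = "We maintain a cybersecurity program aligned with NIST standards..."
    print(score_disclosure(sample))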
Paper Number
1739
Recommended Citation
Pourbehzadi, Motahareh; Le Nguyen Huong, Tra; Javidi, Giti; and Luthra, Anita, "Enhancing Cloud Security Assessment Frameworks for the Generative AI Era: An Action Design Research Approach" (2025). AMCIS 2025 Proceedings. 31.
https://aisel.aisnet.org/amcis2025/sig_sec/sig_sec/31
Enhancing Cloud Security Assessment Frameworks for the Generative AI Era: An Action Design Research Approach
Comments
SIGSEC