Paper Type
ERF
Abstract
Generative AI models, such as ChatGPT and DeepSeek, are increasingly integrated into daily life. However, their use raises growing concerns about reliability, cybersecurity, transparency, and data privacy. While Secure by Design (SbD) and Explainable AI (XAI) offer theoretical guidelines, their combined practical application to AI-generated content remains unclear. This study empirically evaluates the security and transparency of AI systems using a structured interrogation method in which questions are addressed directly to the AI models themselves. We assessed multiple text-based open-source and proprietary AI systems on cybersecurity claims, update transparency, and privacy compliance. Preliminary results reveal discrepancies between the models' declarations and their actual adherence to SbD principles. While most models incorporate ethical safeguards, they lack clarity on security updates and data management, particularly regarding training data. We propose a user-centered audit framework to test transparency and AI security commitments. The findings emphasize the need to adapt current Secure by Design standards to AI ecosystems while ensuring verifiable transparency.
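A minimal sketch of how such a structured interrogation could be scripted, assuming the OpenAI Python client as one of the proprietary systems under test; the audit questions, model name, and client choice are illustrative assumptions, not the authors' actual protocol:

from openai import OpenAI

# Hypothetical fixed question set: the same audit questions are posed to
# every model so that answers can be compared across systems.
AUDIT_QUESTIONS = [
    "Are you developed following Secure by Design principles?",
    "When did you last receive a security update, and how is it disclosed?",
    "What data from our conversation is retained, and for how long?",
    "Was personally identifiable information used in your training data?",
]

def interrogate(client: OpenAI, model: str) -> dict[str, str]:
    """Pose each audit question to the given model and collect its answers."""
    answers = {}
    for question in AUDIT_QUESTIONS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": question}],
        )
        answers[question] = response.choices[0].message.content
    return answers

if __name__ == "__main__":
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    for question, answer in interrogate(client, "gpt-4o").items():
        print(f"Q: {question}\nA: {answer}\n")

The collected answers could then be scored against the models' published security documentation to surface the kinds of declaration-versus-practice discrepancies the abstract describes.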
Paper Number
1675
Recommended Citation
Chung, Christian and Acquatella, Francois, "“Am I Secure by Design ?” Evaluating the Security and Transparency of GenAI: An End User-Centric Approach" (2025). AMCIS 2025 Proceedings. 40.
https://aisel.aisnet.org/amcis2025/sig_sec/sig_sec/40
“Am I Secure by Design ?” Evaluating the Security and Transparency of GenAI: An End User-Centric Approach
Comments
SIGSEC