Paper Number

ICIS2025-1211

Paper Type

Short

Abstract

This study examines how AI washing, defined as the gap between firms' public communication about AI capabilities and their substantive AI implementation, affects data breach risks. Drawing on the tension between innovation and security in IS research, we conceptualize AI washing as strategic misalignment that creates cybersecurity vulnerabilities. Using a panel dataset of U.S. publicly traded firms (2016-2024), we develop a 2×2 typology based on "AI talk" (multi-channel communication) and "AI walk" (actual AI investments) to identify four strategic profiles: AI washing, AI vocal, AI silent, and AI downplaying firms. We propose that AI washing firms face higher breach risks than other strategic types due to resource misallocation, increased target attractiveness, and weakened internal security posture. Further, we theorize that internal corporate social responsibility (CSR) mitigates insider breach risks, while external CSR amplifies external breach risks for AI washing firms. Our framework advances understanding of how symbolic AI engagement shapes organizational vulnerabilities and how CSR practices moderate these security outcomes.

Comments

05-ResponsibleIS

Dec 14th, 12:00 AM

Walking the AI Talk: AI Washing and Data Breach Risks
