Abstract

The integration of generative artificial intelligence (GenAI) tools has transformed organizational workflows by enhancing efficiency, automation, and problem-solving. However, these tools also introduce a novel information security risk: unauthorized GenAI data disclosure (UGDD), in which employees inadvertently or intentionally submit sensitive organizational data to GenAI tools in ways that violate policy and compromise confidentiality. Existing information security behavior (ISB) frameworks focus primarily on personal or environmental predictors of misuse but rarely account for technological features or their interactions with personal and environmental factors. Drawing on the Person–Environment–Technology (PET) framework, this study theorizes how employee motivations, organizational context, and GenAI-specific technological affordances jointly shape disclosure behavior, explaining when and why employees engage in UGDD. This approach extends ISB theory by highlighting the interactive, AI-mediated nature of disclosure, offering a comprehensive lens for predicting and managing workplace risks.
