Abstract

Employees in business organizations increasingly use generative AI (GenAI) applications such as ChatGPT and Gemini as part of their daily workflows for tasks including content creation, brainstorming, information retrieval, and programming assistance, aiming to enhance efficiency and productivity. However, the pace of adoption makes it difficult for organizations to develop policies or train employees, raising significant concerns about the secure and responsible use of GenAI in professional environments (Kimbel et al., 2024; Cardon et al., 2023). The core functionality of many GenAI applications involves sharing and processing user data that may contain personal details or sensitive information (Duffourc et al., 2024). Despite warnings against sharing sensitive information because of the privacy and security risks involved, studies indicate that users continue to disclose sensitive details: industry reports show that over 50% of GenAI inputs include personally identifiable information and that confidential company material is frequently leaked (Menlo Security, 2024), often because of a lack of awareness or clear guidelines (Diro et al., 2025). In the absence of formal policies or clear organizational guidance, employees may rely on self-developed mitigation strategies; however, these informal methods can be inconsistent or ineffective, potentially exacerbating privacy and security risks (Kimbel et al., 2024). Furthermore, employees' perceptions of these risks significantly influence their trust in GenAI technologies and may ultimately undermine their willingness to use such tools (Cardon et al., 2023). This study will explore these privacy and security concerns associated with GenAI use in the workplace. We plan to use a questionnaire to assess awareness, usage intentions, and risk perceptions, and will conclude with practical recommendations to help organizations mitigate these risks.
