Abstract

As third-party generative AI systems are increasingly integrated into enterprises, concerns arise about knowledge leakage: external models inadvertently absorbing or exposing sensitive knowledge, including proprietary information, business processes, techniques, and organizational know-how. Such knowledge represents a competitive advantage; if it leaks to external models, organizational value erodes and security is compromised. To address these risks, this research proposes a framework that quantitatively assesses the sensitivity of queries on a continuous scale, giving firms fine-grained governance over the types of knowledge permissible to share with external models and protecting both confidential data and critical knowledge. The framework was implemented using a test case from the energy services sector. In evaluation, it predicted the sensitivity of model queries with a mean squared error of 0.0291, indicating a high degree of accuracy. Implications are discussed.
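The abstract reports accuracy as mean squared error over continuous sensitivity scores. The following is a minimal illustrative sketch, not the paper's implementation: it assumes sensitivity is scored on a [0, 1] scale and compared against reference labels, and all query scores shown are hypothetical.

    # Illustrative sketch only; the paper's model and data are not published here.
    # Assumes sensitivity is rated on a continuous [0, 1] scale
    # (0.0 = public, 1.0 = highly sensitive) and evaluated with MSE.

    def mean_squared_error(predicted, actual):
        """MSE = (1/n) * sum((p - a)^2) over paired scores."""
        assert len(predicted) == len(actual) and predicted
        return sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)

    # Hypothetical reference labels and framework outputs for four queries.
    actual = [0.10, 0.85, 0.40, 0.95]
    predicted = [0.12, 0.78, 0.45, 0.90]

    print(f"MSE: {mean_squared_error(predicted, actual):.4f}")

A lower MSE means the framework's continuous sensitivity scores track the reference labels more closely; the paper's reported value of 0.0291 is on this same scale.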
