This study addresses privacy concerns in conversational Artificial Intelligence (AI) applications, focusing specifically on generative AI chat tools based on Large Language Models (LLMs). We first discuss the privacy risks associated with the use of LLM-based chat applications in business contexts, and then propose a method that identifies sensitive user input and replaces it with less sensitive alternatives. Our goal is to mitigate privacy risks while maintaining the quality of the generated responses. We plan to evaluate the effectiveness of the proposed method through an experimental study on real-world data, using defined privacy and quality metrics.
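The paper's actual identification-and-replacement method is not specified in this abstract; as a purely illustrative sketch, one simple way to replace sensitive user input with less sensitive alternatives before it reaches an LLM is pattern-based masking. The patterns, labels, and the `mask_sensitive` helper below are hypothetical, not the authors' method:

```python
import re

# Hypothetical regex patterns standing in for a real sensitive-input detector
# (the abstract does not specify the proposed identification method).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_sensitive(text: str) -> str:
    """Replace detected sensitive spans with generic placeholders,
    yielding a less sensitive prompt to send to the chat application."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

masked = mask_sensitive("Contact Jane at jane.doe@example.com or +1 555-123-4567.")
```

A real system would likely combine such rules with learned entity recognition and would need to preserve enough context for the LLM's response to stay useful, which is exactly the privacy-versus-quality trade-off the study proposes to measure.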