Abstract
AI conversational drift refers to the phenomenon in which an AI-powered conversational agent, such as a chatbot or assistant, gradually shifts away from the original topic or intent of the conversation (Namala, 2024). Drift is a natural side effect of multi-turn conversations (i.e., human-AI interactions involving multiple exchanges), particularly with large language models that generate responses based on patterns in previous input, and it becomes more pronounced during longer conversation or research sessions. What starts as a focused conversation can slowly fall apart as the AI veers off topic, loses context, or shifts tone.

Assuming the user has not unintentionally steered the AI off topic (e.g., by using a different word or misspelling a word), drift can occur for several reasons. The AI can misinterpret follow-up questions, producing responses that are not aligned with the user's intent and pulling the conversation away from the intended topic. Natural, conversational language is complex and often ambiguous, and AI systems may struggle to maintain context over extended interactions, especially when dealing with abstract, nuanced, or multi-faceted topics. AI systems may also lack the ability to fully understand and retain context throughout a conversation; they make assumptions, over-generalize, or bring in irrelevant context and go off topic. Finally, the underlying algorithms and models may have limitations in handling long conversations or maintaining coherence over time. According to Cesar Castro at Systems Trends, “AI models like ChatGPT work within a fixed ‘context window.’ That means they can only ‘see’ a limited portion of the conversation at once. As the session grows, earlier inputs are pushed out of view… It adapts quickly but that flexibility can lead it off track. It also balances competing priorities: accuracy, safety, tone, and helpfulness.”

You may not recognize AI conversational drift by name, but you have probably experienced it if you have interacted with a generative AI system such as ChatGPT. Did the conversation or the AI’s responses feel off or unhelpful? Were you skeptical about the accuracy of the information? Did you have to correct the AI or start over, or did you simply close the chatbot and walk away? If you have had these experiences, you were likely caught in an AI conversational drift.

We propose to identify occurrences of AI conversational drift and then explore the cause(s) of the drift. For example, if a user types, “Tell me about AI in medicine,” and four turns later the AI responds, “Also, in the fintech industry...,” that is an instance of AI conversational drift. But why did the drift occur? Was the last prompt ambiguous? Has the conversation exceeded the AI model’s context memory? Does the AI not understand the exact context of the inquiry? Answers to these questions may help improve model performance, the relevance and coherence of dialogue, and ultimately increase human comfort with, and acceptance of, AI systems.

References

Castro, C. (2025, March 24). [Post]. LinkedIn. https://www.linkedin.com/posts/cesarecastrotorres_ai-chatgpt-conversationalai-activity-7309377204214239232-4k3z/

Namala, P. K. (2024, January 16). AI chatbot precision: Techniques to avoid conversational drift. Medium. https://phaneendrakn.medium.com/ai-chatbot-precision-techniques-to-avoid-conversational-drift-219845659a71
Recommended Citation
Armstrong, Deborah J.; Armstrong, Kenneth; and Zaza, Sam, "AI Conversation Drift" (2025). AMCIS 2025 TREOs. 15.
https://aisel.aisnet.org/treos_amcis2025/15