Since late 2022, the adoption of generative artificial intelligence (GenAI) and large language models (LLMs) by individuals, within organizations, and across platforms has shown considerable promise in enhancing efficiency, sparking creativity, and augmenting human intelligence. However, this rapid deployment has led to the premature automation of tasks that traditionally benefit from greater cognitive input. For example, evidence indicates that a significant portion of scientific writing and peer review has already been outsourced to LLMs (Liang et al., 2024a; 2024b). Relatedly, concerns have arisen that increasing dependence on GenAI might undermine cognitive capabilities and produce adverse organizational and societal outcomes. Hence, it stands to reason that GenAI may soon become the first choice on which people depend to complete a variety of cognitively challenging tasks, even though, without adequate human vigilance and oversight, it may not be the best choice. From a psychological point of view, the decreased cognitive effort experienced during GenAI use might be tempting, encouraging humans to relinquish further control, potentially at the cost of deliberative and sound judgment.

While emerging literature has shed light on this tendency of the “copilot becoming the pilot”, conceptualizing GenAI dependence and problematic use poses several challenges. First, as GenAI technologies continue to evolve and integrate into daily life, distinguishing essential reliance from problematic use becomes more difficult. Second, overdependence may be highly context-specific and task-dependent, making it hard to draw broad conclusions. Motivated by the practical importance of the problem and the theoretical shortcomings of the literature, we propose the following research questions: “What is GenAI dependence and problematic use?
What contributes to and results from it?” To answer these questions, we draw on prior research on media dependency theory, affordance theory, bounded rationality theory, and automation overdependence to propose a new, synthesized theoretical framework. We expect this framework to help future research conceptualize GenAI dependence and distinguish types of problematic use. It can also guide theoretical efforts to account for differences in GenAI use across contexts, such as the workplace or the classroom.

Future research could build on our framework by developing psychometric measures of GenAI dependence and by empirically testing the antecedents and consequences of problematic GenAI use. Additionally, exploring long-term impacts through neuroscientific studies and qualitative interviews could provide a deeper understanding of how GenAI affects cognition, both positively and negatively.