A complex decision-making task such as making a financial investment requires multiple types of domain expertise, such as quantitative analysis, risk management, and trading strategies. Because expertise is domain-specific, decision-makers' needs for task-related domain information in AI-augmented decision-making vary with their expertise levels across the task-related domains. AI systems can augment human decision-making by identifying patterns that humans may miss. They can help domain experts improve their performance and further develop their decision-related expertise as the task context changes, and they can help domain novices achieve their goals despite limited domain knowledge or experience while supporting their learning of task-related domain expertise for future decision-making. Deciding how much to rely on AI systems with nontransparent algorithms poses a major challenge for humans, especially in high-stakes tasks like financial investing, where decision outcomes are perceived as critical to decision-makers. Without a good understanding of an AI algorithm's reasoning, humans may interact with AI systems inappropriately. Following the agentic IS artifact delegation framework, I define humans' "AI delegation strategies" as the strategies they use to appraise and distribute decision rights and responsibilities between themselves and AI, and to coordinate with AI systems to achieve their decision-making goals. Drawing on mental model theory, humans' mental models of AI delegation strategies refer to the cognitive frameworks that guide their AI delegation strategies during AI-aided decision-making. Appropriate AI delegation stems from accurate mental models of AI delegation strategies, and providing AI explanations helps humans develop such mental models.
AI explanations inform humans about how and why AI systems make certain suggestions. However, there is no one-size-fits-all answer to how much domain information AI explanations should provide. Understanding how much task-related domain information AI explanations should convey, given humans' expertise levels, is crucial for achieving AI-aided decision-making goals. This study examines the distinct information needs of domain experts and novices for effective AI-aided decision-making. More specifically, it seeks to identify the appropriate amounts of domain information AI explanations should convey to calibrate domain experts' and novices' AI delegation strategies and improve their long-term AI-aided decision-making performance. I argue that domain experts need detailed information from AI explanations to update and reinforce their mature mental models of AI delegation strategies. Domain novices, in contrast, need concise information because they lack a well-structured mental model with which to internalize detailed domain information; too much domain information could slow or distort novices' development of accurate mental models of AI delegation strategies. To test my hypotheses, I plan to conduct an online experiment on stock price prediction tasks with experts and novices in quantitative analysis, varying the amount of information AI explanations provide in the quantitative analysis domain as the treatment. This work will extend our understanding of AI-augmented decision-making and AI explanation interface design. It will also bridge mental model theory and the agentic IS artifact delegation framework by studying how humans develop mental models of AI delegation strategies during AI-augmented decision-making. Finally, it will offer guidance for the adoption of decision-augmenting AI products among users with different levels of domain expertise.