Abstract

While artificial intelligence (AI) holds great promise for clinical decision-making, its adoption faces significant challenges. Integration into clinical workflows remains at an early stage, with barriers such as clinician resistance rooted in trust issues, legal concerns, and system integration difficulties (Giordano et al., 2021). This research seeks to identify key obstacles to AI adoption in Clinical Decision Support (CDS) and to explore frameworks that enhance its practical use in clinical settings. One of the major hurdles to acceptance of AI in CDS is the "black box" nature of some AI systems, which undermines clinician trust. Providing explanations of how AI arrives at its recommendations (explainable AI, or XAI) and ensuring transparency are crucial for building trust and encouraging use of these systems (Saraswat et al., 2022). Other scholars criticize current XAI methods as often misleading or insufficiently validated in healthcare and advocate rigorous empirical testing of explanation tools in real-world clinical scenarios (Ghassemi et al., 2021). Seamless integration of AI tools with existing clinical workflows and IT systems, particularly electronic health records (EHRs), is also critical for successful adoption: systems that are intuitive, unintrusive, and easy to use have better chances of implementation, whereas poor integration can increase workload and leave AI tools underutilized. Many AI-based clinical decision support tools have failed in practice due to poor contextual fit, such as a lack of consideration for clinicians' workflows and the collaborative nature of their work (Ulloa et al., 2022). These authors emphasize the "invisible labor" involved in integrating medical AI tools, including translating output into patient care decisions and fostering awareness of AI use; workflow misalignment can increase workload and disrupt the physician-patient relationship, and cumbersome input methods and a lack of interoperability with existing systems such as EHRs further hinder adoption. Finally, Scipion et al. (2025) identify "performance expectancy" (the belief that AI will improve job performance) and "facilitating conditions" (organizational and technical infrastructure support) as universal determinants of AI acceptance among healthcare workers, and note that involving clinicians in the development, implementation, and validation of AI models is a key facilitator of acceptance, especially in specialized care. Together, these scholarly works highlight the complex interplay between AI technology, clinical workflows, and the attitudes of healthcare professionals. The challenge of integrating AI into clinical decision-making is multifaceted, requiring technical, ethical, and practical solutions. Although a few academic frameworks have been proposed to address these issues, most focus on a single aspect, such as transparency, trust, accountability, or usability. The goal of our research is to provide an integrative, theory-based framework that encompasses all of these aspects.
