Paper Number

ECIS2025-1352

Paper Type

CRP

Abstract

The integration of artificial intelligence (AI) in healthcare offers significant opportunities to improve patient outcomes, reduce costs, and streamline processes. However, the successful adoption of AI depends on robust evaluation to address challenges such as regulatory compliance and ethical considerations. Despite numerous existing frameworks, the evaluation landscape remains fragmented, lacking consistency in scope, methodology, and practical applicability. Recent regulatory developments, including the EU AI Act, further underscore the need for structured and transparent evaluation processes. This study addresses these challenges by systematically analyzing existing AI evaluation frameworks in healthcare to identify key components, thematic gaps, and target audiences. Using Jabareen’s (2009) framework-building approach, we synthesize these findings into a comprehensive meta-framework that structures evaluation along an AI lifecycle. We also propose recommendations to support evaluators in selecting and applying suitable frameworks. This research contributes to the development of more consistent, actionable, and context-sensitive evaluation practices for AI in healthcare.

Author Connect URL

https://authorconnect.aisnet.org/conferences/ECIS2025/papers/ECIS2025-1352

Jun 18th, 12:00 AM

NAVIGATING THE COMPLEXITY OF EVALUATING ARTIFICIAL INTELLIGENCE IN HEALTHCARE
