Paper Type

Complete

Abstract

The integration of machine learning (ML) into healthcare demands rigorous fairness assurance to prevent algorithmic biases from exacerbating disparities in treatment. This study introduces FairHealthGrid, a systematic framework for evaluating bias mitigation strategies across healthcare ML models. Our framework combines grid search with a composite fairness score that aggregates fairness metrics weighted by risk tolerances. As output, we present a trade-off map that jointly evaluates accuracy and fairness, categorizing each solution (model + bias mitigation strategy) into one of five regions: Win-Win, Good, Poor, Inverted, or Lose-Lose. We apply the framework to three healthcare datasets. Results reveal substantial variability across healthcare applications: the framework identifies model and bias-mitigation combinations that balance equity and accuracy, yet also highlights the absence of universal solutions. By enabling systematic trade-off analysis, FairHealthGrid allows healthcare stakeholders to audit, compare, and select ethically aligned ML models for specific healthcare applications, advancing toward equitable AI in healthcare.
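The abstract does not spell out the composite score or the region rule, but a minimal Python sketch of one plausible reading is given below. The metric names, the risk-tolerance weights, the tolerance parameter tol, and the region boundaries are all illustrative assumptions, not the paper's definitions.

def composite_fairness_score(metrics, weights):
    """Weighted aggregate of fairness-metric disparities.

    metrics maps metric names to absolute disparities (0 = perfectly
    fair); weights encodes the stakeholder's risk tolerance per metric.
    Both dicts are assumed to share the same keys.
    """
    total = sum(weights.values())
    return sum(weights[m] * metrics[m] for m in metrics) / total

def trade_off_region(d_acc, d_fair, tol=0.02):
    """Assign a (model + bias mitigation) solution to a trade-off region.

    d_acc and d_fair are changes versus the unmitigated baseline;
    positive d_fair means the composite disparity score decreased.
    This five-region rule is an assumed reading, not the paper's.
    """
    if d_acc > 0 and d_fair > 0:
        return "Win-Win"                 # both objectives improve
    if d_acc <= 0 and d_fair <= 0:
        return "Lose-Lose"               # both objectives degrade
    if d_fair > 0:                       # fairness gained, accuracy lost
        return "Good" if abs(d_acc) <= tol else "Poor"
    return "Inverted"                    # accuracy gained, fairness lost

# Hypothetical usage:
metrics = {"demographic_parity": 0.08, "equalized_odds": 0.05}
weights = {"demographic_parity": 2.0, "equalized_odds": 1.0}
print(composite_fairness_score(metrics, weights))   # ~0.07
print(trade_off_region(d_acc=-0.01, d_fair=0.03))   # Good

Under this reading, the grid search would evaluate every (model, mitigation) pair, compute the two deltas against the unmitigated baseline, and place each pair on the trade-off map accordingly.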

Paper Number

1709

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1709

Comments

IntelFuture

Presentation Date

Aug 15th, 12:00 AM

Title

FairHealthGrid: A Systematic Framework for Evaluating Bias Mitigation Strategies in Healthcare Machine Learning
