Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024 12:00 AM

End Date

January 6, 2024 12:00 AM

Description

Organizations are pervasively using artificial intelligence (AI) to augment and automate business processes. Meanwhile, ethical concerns have been raised about the tendency of algorithms to replicate existing human biases. To this end, a plethora of technical solutions has been proffered to address algorithmic discrimination. However, some studies find that algorithms that prioritize fairness can be less accurate in their prediction outcomes, eliciting debate about the nature of the trade-off between accuracy and fairness in deploying fair algorithms. In this study, we explicate the contexts surrounding the so-called accuracy-fairness trade-off and make the empirical case for why, when, and how the trade-off manifests in AI systems. Using Python-generated synthetic data for the flexibility of manipulating data features, we propose a classification framework to aid understanding of the algorithmic accuracy-fairness trade-off. Beyond the theoretical contribution, our study has practical implications for designing and implementing efficient and equitable AI systems.
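The paper's own data and classification framework are not reproduced here. As a minimal illustrative sketch of how an accuracy-fairness trade-off can surface in Python-generated synthetic data (the distributions, thresholds, and variable names below are our own assumptions, not the authors' method), the snippet compares a single accuracy-oriented decision threshold against group-specific thresholds that equalize selection rates (demographic parity):

```python
import random

random.seed(0)
n = 20000

# Synthetic data: a sensitive group attribute, a score whose distribution is
# shifted for one group (encoding historical bias), and a noisy true label.
data = []  # (group, score, label)
for _ in range(n):
    g = random.randint(0, 1)
    s = random.gauss(0.8 * g, 1.0)
    y = 1 if s + random.gauss(0, 0.5) > 0.4 else 0
    data.append((g, s, y))

def evaluate(thresholds):
    """Accuracy and demographic-parity gap for per-group decision thresholds."""
    correct = 0
    pos = {0: 0, 1: 0}
    count = {0: 0, 1: 0}
    for g, s, y in data:
        pred = 1 if s > thresholds[g] else 0
        correct += pred == y
        pos[g] += pred
        count[g] += 1
    acc = correct / n
    gap = abs(pos[0] / count[0] - pos[1] / count[1])
    return acc, gap

# (1) One accuracy-oriented threshold applied to everyone.
acc_single, gap_single = evaluate({0: 0.4, 1: 0.4})

# (2) Group-specific thresholds that equalize selection rates: each group is
# cut at its own median score, so both groups are selected at ~50%.
def group_quantile(g, q):
    scores = sorted(s for gg, s, _ in data if gg == g)
    return scores[int(q * len(scores))]

fair = {g: group_quantile(g, 0.5) for g in (0, 1)}
acc_fair, gap_fair = evaluate(fair)

print(f"single threshold: accuracy={acc_single:.3f}, parity gap={gap_single:.3f}")
print(f"fair thresholds:  accuracy={acc_fair:.3f}, parity gap={gap_fair:.3f}")
```

In this toy setup the fair thresholds drive the selection-rate gap near zero while giving up some overall accuracy, mirroring the trade-off the study interrogates; whether and how strongly that trade-off appears depends on how the data features are manipulated, which is the point of the paper's synthetic-data design.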

Contextualizing the Accuracy-Fairness Tradeoff in Algorithmic Prediction Outcomes

https://aisel.aisnet.org/hicss-57/sj/digital-discrimination/7