Paper Type
ERF
Abstract
Generative artificial intelligence (GAI) is reshaping society. Although GAI offers numerous benefits, it also reinforces algorithmic biases in ways that often disadvantage already marginalized communities. While Large Language Model (LLM) bias has attracted increasing attention, one form of bias, epistemic bias, has been largely overlooked. In this paper, we discuss how GAI-based epistemic bias can manifest as epistemic injustice in ways that reduce individual and collective well-being. We synthesize three theories (Fricker’s epistemic injustice theory, the capabilities approach, and standpoint theory) to conceptualize a multi-level framework for understanding epistemic injustice and its effects on individual and collective well-being. We also illustrate how identifying the key assumptions underlying these theories can yield a robust research agenda that helps us better understand epistemic injustice and mitigate its effects.
Paper Number
1731
Recommended Citation
Van Slyke, Craig; Sarabadani, Jalal; and Mosafer, Hossein, "Epistemic Injustice in Generative AI" (2025). AMCIS 2025 Proceedings. 6.
https://aisel.aisnet.org/amcis2025/intelfuture/intelfuture/6
Epistemic Injustice in Generative AI
Comments
IntelFuture