Paper Type

ERF

Abstract

Generative artificial intelligence (GAI) is reshaping society. Although GAI offers numerous benefits, it also reinforces algorithmic biases in ways that often disadvantage already marginalized communities. While Large Language Model (LLM) bias is a topic of increasing interest, one form of bias, epistemic bias, has been largely overlooked. In this paper, we discuss how GAI-based epistemic bias can manifest as epistemic injustice in ways that reduce individual and collective well-being. We synthesize three theories (Fricker's epistemic injustice theory, the capabilities approach, and standpoint theory) to conceptualize a multi-level framework for understanding epistemic injustice and its effects on individual and collective well-being. We also illustrate how identifying the key assumptions underlying these theories can yield a robust research agenda that helps us better understand epistemic injustice and mitigate its effects.

Paper Number

1731

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1731

Comments

IntelFuture

Title

Epistemic Injustice in Generative AI
