Abstract
Large Language Models (LLMs) such as ChatGPT are increasingly used in fields such as literature and history to interpret complex content that can be analysed from multiple perspectives. This growing reliance has important implications for how students, researchers, and the public interpret LLM-generated output. Yet current research on LLMs focuses more on the accuracy or misinformation of their responses than on how these models structure and filter knowledge itself. This deeper issue is especially critical in fields like history, where representing diverse viewpoints is essential for an unbiased understanding of the past. Accordingly, this paper explores how LLMs like ChatGPT construct narratives from both elite perspectives (i.e., well-recorded sources such as political and military leaders) and marginalized perspectives (i.e., colonized populations or lower-class civilians), the latter often excluded from mainstream historical discourse. My study draws on postcolonial theorist Gayatri Spivak's Subaltern Theory, particularly its concept of epistemic silencing: the notion that marginalized groups often struggle to make their views heard because of institutional constraints, despite nominal claims of representation. I argue that LLMs exhibit a similar silencing despite appearing "objective," and that this silencing stems from how the models are designed: LLMs are trained on internet-based data and built to predict statistically likely word sequences, which reinforces the dominant discourses found in their training data. To examine this, I apply persona-based prompting (Schreiber et al., 2024), asking ChatGPT 4o to simulate elite and subaltern historical personas while narrating a historical event. I focus on the 1956 Suez Crisis, an event entangled with colonial legacy, nationalist struggle, Cold War tensions, and regional politics, as a case to test whether LLMs can reflect pluralistic viewpoints or fall back on fixed discourses.
As part of the study, I prompted ChatGPT 4o to interpret the causes and consequences of the Suez Crisis through six personas: three elite personas (a British official, an Egyptian minister, and a US geopolitical strategist) and three subaltern personas with less historical visibility (an Egyptian dockworker, farmer, and clerk). Through this design, I test whether the personas' responses are distinct or converge toward a common narrative. I then analyze the responses using three rubrics, adapted from Schreiber et al. (2024): (1) ideological framing (what values or interests a response emphasizes), (2) narrative convergence (whether the replies blur into similar accounts), and (3) representational autonomy (whether subaltern personas express distinct views or mimic dominant discourse). Preliminary findings from 240 responses (each prompt rerun 40 times) show elite personas receiving definitive responses, while subaltern personas receive vague, slogan-like replies that lack grassroots-level insight and echo some elite-persona responses. This study contributes to IS research in three key ways. First, it moves beyond examining the generic accuracy of LLMs to examining how they reproduce and propagate existing epistemic knowledge structures. Second, it introduces a theory-driven, replicable method, persona-based prompting, into IS research as a way to assess the representational diversity of LLM responses. Third, it aims to provide empirical evidence that LLMs, even when explicitly prompted, may reinforce dominant ideologies, especially in contested domains like history. These findings are vital for designing and implementing LLM-supported systems for academic activities in subjective fields so that they support inclusive, pluralistic, and critically engaged learning.
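The prompting design described above can be sketched as follows. The persona wordings and the prompt template are illustrative assumptions (the study's exact phrasings are not given in the abstract), and no model is actually called here; the sketch only shows how the 6 personas × 40 reruns yield the 240 prompts that would each be sent to a chat-completion endpoint.

```python
# Sketch of the persona-based prompting setup (assumed wording throughout).
ELITE = [
    "a British government official",
    "an Egyptian cabinet minister",
    "a US geopolitical strategist",
]
SUBALTERN = [
    "an Egyptian dockworker",
    "an Egyptian farmer",
    "an Egyptian clerk",
]

# Hypothetical prompt template; the study's actual instructions may differ.
TEMPLATE = (
    "You are {persona} in 1956. In your own voice, explain the causes "
    "and consequences of the Suez Crisis as you experienced them."
)

N_RERUNS = 40  # each persona prompt is rerun 40 times -> 6 * 40 = 240 responses


def build_prompts():
    """Return (group, persona, prompt) triples for every persona/run pair."""
    prompts = []
    for group, personas in (("elite", ELITE), ("subaltern", SUBALTERN)):
        for persona in personas:
            for _run in range(N_RERUNS):
                prompts.append((group, persona, TEMPLATE.format(persona=persona)))
    return prompts


prompts = build_prompts()
print(len(prompts))  # 240 prompts in total
```

Each generated prompt would then be submitted to the model and the reply coded against the three rubrics (ideological framing, narrative convergence, representational autonomy); keeping the group label alongside each prompt makes the elite/subaltern comparison straightforward at analysis time.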
Recommended Citation
Saha, Samirit, "Whose Story Does ChatGPT Tell? Exploring LLM Subalternism" (2025). AMCIS 2025 TREOs. 95.
https://aisel.aisnet.org/treos_amcis2025/95