Paper Type
ERF
Abstract
We propose a multi-dimensional bias analysis framework for large language models (LLMs) that addresses the limitations of existing methods in capturing complex interdependencies and bias propagation pathways. The framework categorizes bias into five hierarchical layers (Data Bias, Algorithmic Bias, Surface-Level Bias, Operational Bias, and Societal Influence) and models their cross-layer interactions through direct propagation, cascading effects, and feedback loops. This structure supports root-cause analysis and enables detailed evaluation through a dynamic ranking system. We validate our approach through case studies on ChatGPT and Gemini, showing distinct bias profiles and emphasizing the importance of systematic, multi-layered analysis for improving transparency and fairness in LLMs. The results demonstrate the framework’s potential to guide more effective bias detection and support the development of more ethical AI systems.
Paper Number
2334
Recommended Citation
Syed, Basil; Charlebois, Daniel Arana; Ezzati-jivan, Naser; Tahmooresnejad, Leila; and Ayanso, Anteneh, "Multi-Dimensional Bias Analysis in LLMs Using Hierarchical and Interaction Models" (2025). AMCIS 2025 Proceedings. 15.
https://aisel.aisnet.org/amcis2025/data_science/sig_dsa/15
Comments
SIGDSA