Paper Type

ERF (Emergent Research Forum)

Abstract

We propose a multi-dimensional bias analysis framework for large language models (LLMs) that addresses the limitations of existing methods in capturing complex interdependencies and bias propagation pathways. The framework categorizes bias into five hierarchical layers (Data Bias, Algorithmic Bias, Surface-Level Bias, Operational Bias, and Societal Influence) and models their cross-layer interactions through direct propagation, cascading effects, and feedback loops. This structure supports root-cause analysis and enables detailed evaluation through a dynamic ranking system. We validate our approach through case studies on ChatGPT and Gemini, showing distinct bias profiles and emphasizing the importance of systematic, multi-layered analysis for improving transparency and fairness in LLMs. The results demonstrate the framework’s potential to guide more effective bias detection and support the development of more ethical AI systems.

Paper Number

2334

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/2334

Comments

SIGDSA

Aug 15th, 12:00 AM

Multi-Dimensional Bias Analysis in LLMs Using Hierarchical and Interaction Models

