Journal of Information Technology

Document Type

Research Article

Abstract

This paper presents a new perspective on the problem of bias in artificial intelligence (AI)-driven decision-making by examining the fundamental difference between AI and human rationality in making sense of data. Current research has focused primarily on software engineers’ bounded rationality and on bias in the data fed to algorithms, but has neglected the crucial role of algorithmic rationality in producing bias. Using a Weberian distinction between formal and substantive rationality, we examine why AI-based algorithms lack the ability to display common sense in data interpretation, leading to flawed decisions. We first conduct a rigorous text analysis to uncover and exemplify contextual nuances within the sampled data. We then combine unsupervised and supervised learning, revealing that algorithmic decision-making characterizes and judges data categories mechanically because it operates through the formal rationality of mathematical optimization procedures. Next, using an AI tool, we demonstrate how the formal rationality embedded in AI-based algorithms limits their capacity to perform adequately in complex contexts, thus leading to bias and poor decisions. Finally, we delineate the boundary conditions and limitations of leveraging formal rationality to automate algorithmic decision-making. Our study provides a deeper understanding of the rationality-based causes of AI’s role in bias and poor decisions, even when data is generated in a largely bias-free context.
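To make the abstract’s methodological point concrete, the sketch below (not drawn from the paper; the toy corpus, scikit-learn components, and parameters are illustrative assumptions) shows a generic unsupervised-then-supervised pipeline that forms and reproduces categories purely by optimizing formal mathematical objectives, with no access to the contextual meaning of the texts it sorts.

```python
# Illustrative sketch only: a generic cluster-then-classify pipeline, NOT the
# authors' method. Categories emerge solely from formal objectives
# (within-cluster variance, log-loss), not from contextual understanding.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Toy corpus (assumed): "cold" means an illness in some sentences, weather in others.
docs = [
    "She caught a cold and stayed home from work",
    "He has a cold, so he is resting in bed today",
    "The cold weather froze the lake overnight",
    "A cold front will bring snow to the region",
]

# Unsupervised step: TF-IDF vectors grouped by minimizing within-cluster variance.
X = TfidfVectorizer().fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Supervised step: a classifier learns to reproduce the clusters' formal boundaries.
clf = LogisticRegression(max_iter=1000).fit(X, clusters)
print(clusters, clf.predict(X))

# Whatever split emerges reflects word-overlap geometry, not whether "cold"
# denotes an illness or the weather: the pipeline judges categories mechanically.
```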

DOI

10.1177/02683962231176842
