Abstract

Online platforms used in human resources (HR) generate extensive trace data, and artificial intelligence (AI) systems increasingly rely on such data to support hiring tasks, including screening and scoring applicants. As the use of these systems in organizational decision environments has increased, concerns about fairness and equity have likewise intensified, raising questions about how algorithmic bias emerges and how it can be effectively addressed without undermining predictive performance. We propose a layered Needs-Affordances-Features (NAF) framework for the design of fair AI systems. The framework conceptualizes fairness interventions across two interrelated stages. In the discovery stage, theory is used to identify mechanisms of discrimination embedded in digital trace data. In the utilization stage, these insights are translated into targeted design interventions that seek to improve fairness while preserving accuracy. Applying this framework to a real-world dataset of 2,506 applicants’ interview responses and historical hiring decisions, we identify (a) language-based discrimination and (b) interview question structure as key mechanisms shaping algorithmic fairness. We further demonstrate that modifying linguistic features and increasing interview structure can reduce subgroup disparities, with the strongest fairness improvements observed when these interventions are jointly applied. This study contributes to the information systems (IS) literature by offering a theoretically grounded and empirically validated framework that can aid fairness-aware AI system design.

DOI

10.17705/1jais.00993
