Abstract

Artificial intelligence (AI) systems increasingly augment or replace human agents in organizational processes. When AI takes on the role of an evaluator, it shapes the behavior and perceptions of those being evaluated. Because AI is often perceived as more objective than human judgment, this shift is particularly consequential in subjective processes such as recruitment, and may be especially relevant for applicants from stereotyped groups. In traditional hiring processes, a phenomenon called stereotype threat can arise: individuals from marginalized groups fear confirming a negative stereotype about their social group, which can impair their performance. Preventing such underperformance requires a selection process that is, and is perceived to be, fair and objective. Drawing on stereotype threat theory and organizational justice theory, we examine how AI shapes the behavior and perceptions of applicants applying for roles not typically associated with their gender (e.g., women in STEM fields). The perceived objectivity of AI may reduce the pressure associated with stereotype threat, allowing these individuals to perform better. We focus on a subjective task within a pre-employment assessment. This ongoing research presents a theoretical model of how an AI evaluator affects performance in pre-employment assessments, with the effect mediated by performance pressure and performance anxiety and moderated by gender and perceived fairness. The study also examines at which stage of the application process (pre-selection or pre-employment assessment) the use of AI evaluators enables applicants to perform better. The model will be tested empirically in a subsequent experimental study.
