Abstract

There is an emerging trend of using artificial intelligence (AI) to support humans in decision-making, from employee recruitment to criminal justice. It is important to design and implement human-centric AI, which calls for a thorough empirical understanding of users' fairness perceptions in practical scenarios. Drawing upon organizational justice theory and the explainable artificial intelligence (XAI) literature, we investigate how AI explanations and task objectivity may jointly influence three dimensions of users' perceived fairness, which in turn shape their trusting intentions toward AI decision-making systems. We propose that XAI positively influences users' perceived distributive, procedural, and interactional fairness, and that it is more effective for objective tasks than for subjective tasks. We will test the proposed model in a 3x2 experiment. We believe this study will contribute significantly to the theoretical landscape of XAI and human-AI interaction and offer important practical implications for AI system designers.

Comments

Paper Number 1631; Track Human; Short Paper
