Abstract
Artificial intelligence (AI) is revolutionizing the way we make decisions, but it is rarely perfect, and human-centric AI calls for a thorough empirical understanding of how theoretical fairness notions translate into perceptions of fairness in practical scenarios. Drawing upon the explainable artificial intelligence literature and the elaboration likelihood model, we investigate the interaction effects of the explanation specificity of AI systems and the issue involvement of users. We used a 3×2 experimental design with 456 participants to test the proposed research model. We found that for individuals with low issue involvement, AI with global explanations is more effective in shaping fairness perceptions of AI decisions, whereas AI with feature-based explanations is more effective for individuals with high issue involvement; these fairness perceptions in turn lead to trusting intentions towards AI decision-making systems. This study contributes to the theoretical landscape of AI fairness and human-AI interaction, and provides important practical implications for AI designers.
Recommended Citation
Song, Yiliao; Cui, Tingru; and Liu, Feng, "DESIGNING FAIR AI SYSTEMS: HOW EXPLANATION SPECIFICITY INFLUENCES USERS’ PERCEIVED FAIRNESS AND TRUSTING INTENTIONS" (2023). ECIS 2023 Research-in-Progress Papers. 7.
https://aisel.aisnet.org/ecis2023_rip/7