Abstract

Artificial intelligence (AI) is revolutionizing the way we make decisions, but it is rarely perfect, and human-centric AI calls for a thorough empirical understanding of how theoretical fairness notions translate into perceptions of fairness in practical scenarios. Drawing upon the explainable artificial intelligence literature and the elaboration likelihood model, we investigate the interaction effects of AI explanation specificity and users' issue involvement. We used a 3x2 experimental design with 456 participants to test the proposed research model. We found that for individuals with low issue involvement, AI with global explanations is more effective, whereas AI with feature-based explanations is more effective for individuals with high issue involvement, in shaping their fairness perceptions of AI decisions and, consequently, their trusting intentions toward AI decision-making systems. This study contributes significantly to the theoretical landscape of AI fairness and human-AI interaction, and offers important practical guidance for AI designers.
