Abstract

Explainable Artificial Intelligence (XAI) applications are widely used in interactions with end users. However, there remains a lack of understanding of how the characteristics of these systems, particularly anthropomorphic design and the type of explanations provided, interact to affect user reactions to AI. We address this research gap by building on social response theory (SRT) and prior literature on XAI and anthropomorphic design to investigate how anthropomorphic design (human-like vs. machine-like) and explanation type (consensual, expert, internal, and empirical validation-based explanations) affect user reactions to AI (perceived trust and persuasiveness) and acceptance of AI systems. We will evaluate the proposed research model in a 2 × 4 between-subjects experiment. This study will enrich the theoretical landscape of anthropomorphic design and human-AI interaction (HAII) and offer actionable insights into user perception and acceptance for XAI practitioners.