Abstract

This preliminary study investigates how explainability features in Generative AI, particularly intelligibility and stability, influence causability (the user's ability to understand and reconstruct AI reasoning). It also examines how perceptions of GenAI humanness moderate the relationship between causability and two distinct types of trust: human-like and system-like. Drawing on Social Cognitive Theory, the research further explores how these trust dimensions affect user self-efficacy, defined as confidence in interacting effectively with GenAI systems. Employing Partial Least Squares Structural Equation Modeling (PLS-SEM) on data from professionals experienced with GenAI tools, the study aims to provide empirical insights into how clear and interpretable explanations enhance user confidence and engagement. By addressing the opacity and complexity characteristic of deep learning-based AI systems, the research contributes to the development of responsible and transparent AI solutions that improve interpretability, trust, and effective user interaction.
