Owing to their high functional complexity, trust plays a critical role in the adoption of intelligent decision support systems (DSS). Failures during initial usage phases are especially likely to endanger trust, since users have not yet had the opportunity to assess the system's capabilities over time. Because such initial failures are unavoidable, it is crucial to understand how providers can inform users about system capabilities in order to rebuild user trust. Using an online experiment, we evaluate the effects of two transparency measures on trust: recurring explanations and initial tutorials. We find that recurring explanations are superior to initial tutorials in establishing trust in intelligent DSS. However, in rebuilding trust after initial failures have occurred, recurring explanations are only as effective as tutorials or as the combination of both measures. Our results provide empirical insights for the design of transparency mechanisms for intelligent DSS, especially those with high underlying algorithmic complexity or high potential for damage.