PACIS 2022 Proceedings

Paper Number

1458

Abstract

Improving the accuracy of algorithmic prediction has gained attention in Information Systems research in recent decades. Information systems that include algorithmic prediction have been shown to provide organisational value. However, as decisions based on these opaque algorithms become more ubiquitous, public demand for explanations of their output has naturally increased. This review evaluates research examining how providing explanations for algorithmic predictions affects users' responses to algorithmic decision-making systems. An analysis of 42 articles identifies four primary themes in the contributions of explainable systems to research on algorithmic decision-making: (1) users' trust in the system, (2) users' understanding, (3) users' acceptance of the prediction and/or the system, and (4) users' perception of the system's usefulness. Findings from the analysis highlight gaps in the literature, directions for future research, and the need for interdisciplinary collaboration to create responsible explainable algorithmic decision-making systems.
