PACIS 2022 Proceedings

Paper Number

1422

Abstract

As organizations deploy artificial intelligence (AI) to improve decision making, they may encounter inaccurate algorithmic predictions. Such predictions can result from biased training data and from variations in the design and application context of algorithms, and they can have negative consequences for organizations. While technical approaches are being developed to reduce bias and improve algorithmic predictions, the question of how users respond to algorithmic predictions (e.g., acceptance) is less studied in IS research. Focusing on characteristics of AI, users, and tasks, this paper proposes a model to explain how AI interpretability, domain expertise, and task complexity influence users' behavioral responses toward algorithmic predictions in agricultural settings. The model will be tested in an experimental study, the results of which will provide new insights into the social side of algorithmic systems and help inform the design of robust AI-enabled systems.
