The Effect of Interpretable Artificial Intelligence on Repeated Managerial Decision-Making under Uncertainty
Location
Hilton Hawaiian Village, Honolulu, Hawaii
Event Website
https://hicss.hawaii.edu/
Start Date
January 3, 2024
End Date
January 6, 2024
Description
Business decisions involving investments, healthcare, and supply chains are often made in uncertain environments. Moreover, choices that are optimal ex ante may look wrong in hindsight, which may explain why decision-makers hesitate to use AI algorithms under high uncertainty. While some studies suggest that making AI and ML applications more understandable can boost their adoption and the trust placed in them, this has not been examined in uncertain settings where decision-makers must make repeated business decisions. Our study addresses this gap empirically by analyzing how different interpretability approaches affect AI adoption and trust under varying levels of uncertainty. Surprisingly, we find that providing interpretability does not necessarily increase AI adoption; in some cases, it may even reduce it. Interestingly, although AI adoption was higher under high uncertainty, trust in the AI recommendations was significantly lower than under low uncertainty across all interpretability types. We find clear evidence that showing users the AI's cumulative monetary performance as a benchmark, side by side with their own monetary performance, enhances trust in its recommendations.
Recommended Citation
Altintas, Onur; Seidmann, Abraham; Gu, Bin; and Mažar, Nina, "The Effect of Interpretable Artificial Intelligence on Repeated Managerial Decision-Making under Uncertainty" (2024). Hawaii International Conference on System Sciences 2024 (HICSS-57). 2.
https://aisel.aisnet.org/hicss-57/os/digitization/2