Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024, 12:00 AM

End Date

January 6, 2024, 12:00 AM

Description

Business decisions involving investments, healthcare, and supply chains are often made under uncertainty. At the same time, choices that were optimal when made may look wrong in hindsight, which may explain why decision-makers hesitate to rely on AI algorithms under high uncertainty. Although some studies suggest that making AI and machine learning (ML) applications more interpretable can increase their adoption and trust, this has not been examined in uncertain settings where decision-makers face repeated business decisions. Our study addresses this gap empirically by analyzing how different interpretability approaches affect AI adoption and trust under varying levels of uncertainty. Surprisingly, we find that providing interpretability does not necessarily increase AI adoption; in some cases, it can even reduce it. Interestingly, although AI adoption was higher under high uncertainty, trust in the AI recommendations was significantly lower than under low uncertainty across all interpretability types. The evidence is clear that showing users the AI's cumulative monetary performance as a benchmark, side by side with their own monetary performance, enhances trust in the AI recommendations.


The Effect of Interpretable Artificial Intelligence on Repeated Managerial Decision-Making under Uncertainty

https://aisel.aisnet.org/hicss-57/os/digitization/2