Abstract

In high-stakes domains such as healthcare and business, AI-driven decision support systems play a critical role. However, their effectiveness depends on users' ability to navigate the uncertainties inherent in AI predictions. This study explores the cognitive challenges that decision-makers encounter when dealing with two types of uncertainty: epistemic (arising from the model's lack of knowledge, and in principle reducible with more data) and aleatoric (stemming from inherent randomness in the data, and thus irreducible). By investigating how individuals with different levels of machine learning expertise perceive and manage these uncertainties, the research aims to address a crucial gap in the understanding of human-AI interaction. The insights gained will guide the creation of AI tools that are not only more reliable but also better aligned with the cognitive needs of users, ultimately leading to improved decision-making outcomes.
