Title

Follow Me, Everything Is Alright (or Not): The Impact of Explanations on Appropriate Reliance on Artificial Intelligence

Location

Hilton Hawaiian Village, Honolulu, Hawaii

Event Website

https://hicss.hawaii.edu/

Start Date

January 3, 2024, 12:00 AM

End Date

January 6, 2024, 12:00 AM

Description

Artificial Intelligence (AI) has the potential to augment human decision-making in an astonishing variety of domains. However, its opaque nature is a barrier to appropriate reliance on AI-based decision support. One possible solution stems from the research field of Explainable AI (XAI), which creates automatically generated explanations to make the inner workings of AI understandable to humans. Our research on XAI focuses on understanding the impact of explanations alongside confidence scores on appropriate reliance on AI-based decision support systems. To this end, we conducted a randomized, between-subjects online experiment with 126 participants performing an image classification task. We find that while XAI-based explanations alongside confidence scores improve AI users’ relative positive self-reliance, they simultaneously reduce users’ relative positive AI-reliance. Thus, explanations alongside confidence scores can help reduce overreliance on AI but run the risk of causing underreliance. Our findings help advance the understanding of explanations as facilitators of appropriate reliance on AI systems.

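To make the two outcome measures concrete, the sketch below shows one common operationalization of relative positive AI-reliance and relative positive self-reliance from the appropriate-reliance literature; it is an illustrative reading, not the paper's own code. Here relative positive AI-reliance is taken as the share of trials in which a participant who initially answered incorrectly switched to correct AI advice, and relative positive self-reliance as the share of trials in which a participant who initially answered correctly resisted incorrect advice. The `Trial` record and both helper functions are hypothetical names introduced for this example.

```python
# Illustrative sketch (not the paper's implementation): each trial records
# the participant's initial answer, whether the AI advice was correct, and
# the participant's final answer, all scored against ground truth.
from dataclasses import dataclass


@dataclass
class Trial:
    initial_correct: bool  # participant's answer before seeing AI advice
    ai_correct: bool       # whether the AI's advice was correct
    final_correct: bool    # participant's answer after seeing the advice


def relative_positive_ai_reliance(trials: list[Trial]) -> float:
    """Among trials where the human was initially wrong and the AI was right,
    the share in which the participant switched to the correct AI advice."""
    eligible = [t for t in trials if not t.initial_correct and t.ai_correct]
    if not eligible:
        return float("nan")
    return sum(t.final_correct for t in eligible) / len(eligible)


def relative_positive_self_reliance(trials: list[Trial]) -> float:
    """Among trials where the human was initially right and the AI was wrong,
    the share in which the participant kept their own correct answer."""
    eligible = [t for t in trials if t.initial_correct and not t.ai_correct]
    if not eligible:
        return float("nan")
    return sum(t.final_correct for t in eligible) / len(eligible)


if __name__ == "__main__":
    trials = [
        Trial(initial_correct=False, ai_correct=True, final_correct=True),   # switched to correct AI
        Trial(initial_correct=True, ai_correct=False, final_correct=True),   # resisted wrong AI
        Trial(initial_correct=True, ai_correct=False, final_correct=False),  # wrongly followed AI
    ]
    print(relative_positive_ai_reliance(trials))    # 1.0
    print(relative_positive_self_reliance(trials))  # 0.5
```

Under this reading, values near 1 on both measures indicate appropriate reliance, while the paper's finding corresponds to explanations with confidence scores raising the second ratio at the expense of the first.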

Paper URL

https://aisel.aisnet.org/hicss-57/da/xai/2