Paper Type

ERF

Abstract

The lack of explainability in AI-driven autonomous vehicles (AVs) remains a key barrier to user trust and adoption. Current AV systems provide minimal transparency, leading to algorithm aversion and safety concerns. This study aims to examine how AI explainability—focusing on benevolence (user-centered decision-making) and competence (technical proficiency)—influences affective and cognitive trust, shaping perceived safety and adoption intention. Through an online experiment and a lab-based study, we plan to assess the impact of AI transparency under varying driving conditions and cognitive load.

Paper Number

1823

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1823

Comments

UrbanMob

Aug 15th, 12:00 AM

Explaining AI in Autonomous Vehicles: A Path to Trust and Adoption
