Paper Type
ERF
Abstract
The lack of explainability in AI-driven autonomous vehicles (AVs) remains a key barrier to user trust and adoption. Current AV systems provide minimal transparency, leading to algorithm aversion and safety concerns. This study aims to examine how AI explainability—focusing on benevolence (user-centered decision-making) and competence (technical proficiency)—influences affective and cognitive trust, shaping perceived safety and adoption intention. Through an online experiment and a lab-based study, we plan to assess the impact of AI transparency under varying driving conditions and cognitive load.
Paper Number
1823
Recommended Citation
Yang, Junyi; Liu, Si; Xie, Zhecheng; and Lu, Xuecong, "Explaining AI in Autonomous Vehicles: A Path to Trust and Adoption" (2025). AMCIS 2025 Proceedings. 4.
https://aisel.aisnet.org/amcis2025/urbanmob/urbanmob/4
Track
UrbanMob