Paper Number
ICIS2025-1479
Paper Type
Complete
Abstract
Trust is essential for the successful adoption and effective use of AI systems. While existing research has primarily examined trust as a general construct at the system level, this paper empirically explores the concept of feature-specific trust, highlighting how users calibrate trust toward distinct AI features. We investigate this phenomenon in the context of physical AI systems, specifically partially automated vehicles. Using a multi-method empirical approach that combines quantitative measures of trust at multiple time points with qualitative think-aloud protocols and interviews, we demonstrate that trust varies significantly across individual AI features and evolves with increasing user experience. Our findings underscore the importance of distinguishing trust at the level of individual AI features and emphasize the temporal dynamics of trust calibration. Our study contributes to the trust calibration literature by conceptualizing feature-specific trust in physical AI systems and provides insights for designing physical AI systems that better align with user expectations.
Recommended Citation
Stocker, Alexander; Richter, Alexander; and Ebinger, Nikolai, "Feature-Specific Trust Calibration in Physical AI Systems" (2025). ICIS 2025 Proceedings. 14.
https://aisel.aisnet.org/icis2025/user_behav/user_behav/14
Feature-Specific Trust Calibration in Physical AI Systems
Comments
16-UserBehavior