Paper Number

ICIS2025-1479

Paper Type

Complete

Abstract

Trust is essential for the successful adoption and effective use of AI systems. While existing research has primarily examined trust as a general construct at the system level, this paper empirically explores the concept of feature-specific trust, highlighting how users calibrate trust toward distinct AI features. We investigate this phenomenon in the context of physical AI systems, specifically partially automated vehicles. Using a rigorous multi-method empirical approach that combines quantitative measures of trust across multiple time points with qualitative think-aloud protocols and interviews, we demonstrate that trust varies significantly across distinct AI features and evolves over time with increased user experience. Our findings underscore the importance of distinguishing between trust in individual AI features and emphasize the temporal dynamics of trust calibration. Our study contributes to the trust calibration literature by conceptualizing feature-specific trust in physical AI systems and provides insights into designing physical AI systems that better align with user expectations.

Comments

16-UserBehavior

Dec 14th, 12:00 AM

Feature-Specific Trust Calibration in Physical AI Systems
