Paper Number

ECIS2025-1417

Paper Type

CRP

Abstract

Trust calibration is a central component for adopting new information systems (IS) technologies, especially for AI-assisted decision-making systems. While trust is defined as an attitude with dynamic processes that evolve throughout the interaction, current research lacks a comprehensive understanding of how to measure these dynamic changes. This study evaluates the sensitivity of three common trust measurement methods (single-item scales, questionnaires, and trust games) to changes in trust over time. In an online experiment, participants (N = 228) interacted with a simulated AI system for stock-market investments. The results suggest that only questionnaires are sensitive to trust changes and enable the measurement of dynamic trust, while trust games allow the measurement of dynamic reliance processes. This study contributes to developing more sensitive methods to better understand the calibration of trust and reliance in human-AI collaboration, with broader implications for the design and evaluation of IS.

Author Connect URL

https://authorconnect.aisnet.org/conferences/ECIS2025/papers/ECIS2025-1417

Jun 18th, 12:00 AM

Measuring Trust Dynamics in AI-Assisted Decision-Making: Insights from an Experimental Study
