Paper Number
ECIS2025-1614
Paper Type
CRP
Abstract
Despite the remarkable capabilities of large language models (LLMs), their black-box nature often raises concerns about trustworthiness, particularly when users rely on them for data analysis. While providing insights into an LLM’s internal reasoning process through explanations could be a promising approach to addressing this issue, little is known about how LLM explanations impact users. Our study addresses this gap by investigating the impact of explanation provision strategies in LLM-based data assistants. Drawing on cognitive load theory, we conducted a between-subjects online experiment (N=96) to examine how different explanation provision strategies (automatic vs. user-invoked) influence users’ extraneous cognitive load, trust, and task performance. Our results suggest that user-invoked explanations reduce extraneous cognitive load, which in turn positively influences trust and performance in data analysis tasks. We contribute to the nascent literature on LLM explainability by offering novel insights into the impact of explanation provision strategies in interactions with LLM-based assistants.
Recommended Citation
Sîrbu, Ana-Maria; Schelhorn, Till Carlo; and Gnewuch, Ulrich, "Explanation Provision Strategies in LLM-based Data Assistants: Impact on Extraneous Cognitive Load, Trust, and Task Performance" (2025). ECIS 2025 Proceedings. 6.
https://aisel.aisnet.org/ecis2025/hci/hci/6
Explanation Provision Strategies in LLM-based Data Assistants: Impact on Extraneous Cognitive Load, Trust, and Task Performance