Abstract
The widespread adoption of Large Language Model (LLM)-based agents has showcased their remarkable capabilities, but their opaque nature complicates users' understanding of how these agents arrive at an answer. Chain-of-Thought (CoT) prompting has been proposed to address this issue. We investigate how CoT display impacts user effectiveness in data analysis tasks, drawing on the Theory of Effective Use. We conducted an online experiment with 84 participants using a prototype LLM-based analytics assistant. Our results show that CoT display significantly increases users' ability to achieve representational fidelity and suggest an indirect positive effect on task effectiveness. This research contributes to IS literature by providing empirical insights into how CoT explanations affect user interaction with LLM-based assistants and offers practical implications for designing explainable LLM-based systems.
Recommended Citation
Schelhorn, Till; Gnewuch, Ulrich; and Maedche, Alexander, "The Impact of Chain-of-Thought Display on the Effective Use of LLM-based Analytics Agents" (2025). SIGHCI 2024 Proceedings. 15.
https://aisel.aisnet.org/sighci2024/15