Abstract

Generative AI (GenAI) assistant applications purport to increase employee productivity and decision performance in complex tasks. However, the opaque nature of the underlying transformer architecture often leads to inefficient task flows, forcing professionals into long cycles of response evaluation and prompt revision that can increase cognitive load and degrade both the performance and the utility derived from GenAI use. In this study, we explore how AI transparency and explainability affect users' perceived understanding of AI, their reliance on AI, and their cognitive load in organizational decision-making. In a within-subjects experiment, participants performed demand forecasting in a simulation using GenAI assistants that varied in levels of transparency and explainability. This study's findings and method of configuring GenAI assistants extend the conceptual understanding of transparency and explainability and provide practical insights for designing effective GenAI assistants.
