Abstract

Forecast errors can lead to strategic missteps, financial losses, and reduced trust, making forecast accuracy critically important. Advances in artificial intelligence (AI) have challenged the long-standing assumption that humans outperform algorithms under certain conditions. AI models can capture contextual cues and complex relationships that were previously beyond the reach of statistical models. Recent research, however, provides mixed evidence on whether humans, AI, or human-AI collaboration (HAI) achieves the highest forecast accuracy. To address these inconsistencies, we conduct a meta-analysis of studies comparing forecast accuracy across humans, AI, and HAI. Grounded in the Brunswik Lens Model and prior research in forecasting and information systems, we systematically investigate moderators that shape relative forecasting performance. We expect to contribute to theory by synthesizing fragmented evidence and situating moderator effects within the Brunswik Lens Model. Furthermore, we expect to provide practical guidance on when to rely on humans, AI, or HAI.