Paper Number

ICIS2025-2726

Paper Type

Complete

Abstract

Do AI-generated narrative explanations enhance human oversight or diminish it? Our field experiment with 228 evaluators screening 48 early-stage innovations under three conditions (human-only, black-box AI without explanations, and narrative AI with rationales) reveals a human-AI oversight paradox. While explanations aim to strengthen judgment, they increase reliance on AI recommendations. Across 3,002 screening decisions, screeners with AI were 19% more likely to align with AI recommendations, especially when rejecting ideas. Analysis showed narrative persuasiveness and cross-criteria consistency drove this alignment, suggesting narrative coherence serves as a decision heuristic. Both AI conditions outperformed human-only screening, but narrative AI showed no quality improvements over black-box recommendations despite increased compliance. Concerning evidence suggests narrative explanations may increase rejection of high-potential unconventional solutions. Although algorithmic assistance streamlines high-volume tasks, organizations face the challenge of selective cognitive substitution.

Comments

01-ConferenceTheme

Dec 14th, 12:00 AM

Narrative AI and the Human-AI Oversight Paradox in Evaluating Early-Stage Innovations

