Abstract

The rapid emergence of generative AI has driven higher education institutions to adopt varied governance frameworks, yet their effects on student learning remain unclear. This PRISMA-based review of 73 studies (2023–2025) examines how institutional AI policies shape two outcomes: cognitive effort and academic confidence. Drawing on Cognitive Load Theory, Bloom’s Taxonomy, and self-efficacy theory, we identify three policy types: restrictive, permissive, and guided. Findings show restrictive policies limit digital literacy, permissive policies foster over-reliance, and guided frameworks support higher-order engagement and calibrated confidence. A conceptual framework is proposed that models policy-to-outcome pathways, specifies six testable hypotheses, and highlights demographic and literacy moderators. The study contributes by operationalising cognitive effort and confidence as constructs for empirical testing, extending IS debates on AI governance, and offering evidence-based recommendations for AI-resilient assessment and institutional policy design.
