Paper Number

ICIS2025-1187

Paper Type

Complete

Abstract

Self-Regulated Learning (SRL) requires learners to actively manage cognition, motivation, and behavior in pursuit of goals, yet sustaining SRL in digital environments remains challenging. Reflection Prompts (RPs) are scaffolds that pose guiding questions or statements to elicit deliberate and critical analysis of one’s learning process, thereby enhancing SRL. However, their optimal design remains debated. Grounded in Construal Level Theory (CLT), this research optimizes RPs by aligning prompt specificity (specific vs. general) with delivery timing (immediate vs. delayed). Study 1 demonstrates that immediate-specific RPs bolster content mastery through task-focused reflection, while delayed-general RPs improve academic performance via strategic evaluation. Study 2 reveals that combining the two yields complementary gains in SRL behaviors. However, answer-retrieval Large Language Models (LLMs), despite improving efficiency, undermine this complementarity. These findings reconcile debates on RP efficacy by establishing CLT as a guiding design principle and highlight the imperative for AI-empowered education that augments, rather than replaces, learner agency.

Comments

24-Learning


Title

A Construal-Level Approach to Optimizing Reflection Prompts for Self-Regulated Learning
