Journal of Information Systems Education

Abstract

This study examines how generative artificial intelligence can augment human judgment in assurance of learning assessments within business education, using the task-technology fit framework as a guiding lens. A case study in a college of business – where the Management Information Systems program served as a central unit in the assurance of learning cycle – compared generative artificial intelligence-driven evaluations of student writing with traditional faculty assessments. The results demonstrate that, when mediated by human prompt engineering and moderated by human oversight, generative artificial intelligence markedly improves assessment efficiency and scoring consistency while providing more in-depth feedback without compromising evaluation accuracy. These findings indicate that generative artificial intelligence is most effective as a complement to, rather than a replacement for, human evaluators. The study extends task-technology fit theory to generative artificial intelligence-driven educational assessment and introduces a human-integrated, generative artificial intelligence-augmented theoretical model for assurance of learning assessments. In this model, human expertise acts as an iterative mediator (via prompt engineering) to strengthen task-technology alignment, while human oversight serves as a moderator ensuring contextual fidelity and output quality. Beyond its theoretical contribution, the study highlights practical implications for information systems educators and curriculum designers.

DOI

https://doi.org/10.62273/STSP3767
