Paper Type

ERF (Emergent Research Forum)

Abstract

Use of generative artificial intelligence is growing rapidly across a variety of application areas. Within higher education, faculty use AI to augment learning experiences. However, differences in the prompts given to the AI can substantially influence outcomes. The process of crafting prompts to achieve optimal outcomes is called prompt engineering. This paper proposes a study that manipulates the prompts used to instruct an AI that provides feedback on drafts of MBA case analyses and measures the resulting differences in feedback effectiveness. Measured outcomes include clarity of instructions, accuracy, prioritization, supportiveness, and congruence between feedback and assignment criteria.
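To make the prompt manipulation concrete, the following is a minimal, hypothetical Python sketch, not taken from the paper, of how two prompt variants might be constructed for a feedback AI: a generic baseline and an engineered variant that embeds the assignment criteria and the five feedback qualities the study proposes to measure. All template wording, names, and sample inputs below are illustrative assumptions.

```python
# Hypothetical illustration (not from the paper): two prompt variants that could be
# compared when asking a generative AI for feedback on an MBA case-analysis draft.

# Baseline variant: a generic request for feedback.
BASELINE_PROMPT = (
    "You are a teaching assistant. Read the student's draft case analysis "
    "and give feedback on how to improve it.\n\nDraft:\n{draft}"
)

# Engineered variant: embeds the assignment criteria and the feedback qualities
# the study proposes to measure (clarity of instructions, accuracy,
# prioritization, supportiveness, congruence with assignment criteria).
ENGINEERED_PROMPT = (
    "You are a supportive MBA instructor. Read the student's draft case analysis "
    "and give feedback that:\n"
    "1. States clearly what to change and how (clarity of instructions).\n"
    "2. Is factually accurate about the case and the draft (accuracy).\n"
    "3. Lists the most important revisions first (prioritization).\n"
    "4. Uses an encouraging, constructive tone (supportiveness).\n"
    "5. Ties every comment to an assignment criterion below (congruence).\n\n"
    "Assignment criteria:\n{criteria}\n\nDraft:\n{draft}"
)


def build_prompt(template: str, draft: str, criteria: str = "") -> str:
    """Fill a prompt template with a student's draft and the assignment criteria."""
    return template.format(draft=draft, criteria=criteria)


if __name__ == "__main__":
    sample_draft = "The company should expand into Europe because..."
    sample_criteria = (
        "- Problem identification\n- Analysis of alternatives\n- Recommendation"
    )
    # The two variants would be sent to the same generative AI and the resulting
    # feedback compared on the measured outcome dimensions.
    print(build_prompt(BASELINE_PROMPT, sample_draft))
    print(build_prompt(ENGINEERED_PROMPT, sample_draft, sample_criteria))
```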

Paper Number

1503

Author Connect URL

https://authorconnect.aisnet.org/conferences/AMCIS2025/papers/1503

Comments

SIGED

Aug 15th, 12:00 AM

Prompt Engineering for Case Analysis Feedback Using Generative Artificial Intelligence
