As Information Systems courses have become more data-focused and enrollments have grown, the need to assess technical and analytical skills efficiently and effectively has increased. Multiple-choice examinations provide a means for accomplishing this, though creating effective multiple-choice assessment items in a technical course context can be challenging. This study presents an iterative quality improvement framework, based on the Plan-Do-Study-Act (PDSA) quality assurance cycle, for developing and improving such multiple-choice assessments. Integral to this framework, we also present a rigorous, reliable, and valid measure of assessment and item quality using discrimination efficiency and the KR-20 assessment reliability measure. We demonstrate the effectiveness of our approach across exams developed and administered for two courses: a highly technical introductory Information Systems course and an introductory data analytics course. Using this approach, we show that assessment quality improves iteratively when instructors measure items and exams rigorously and apply this PDSA framework.
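The KR-20 statistic named in the abstract is a standard reliability measure for exams scored dichotomously (each item right or wrong). As a minimal sketch, not the authors' implementation: the function name and the use of the sample variance (ddof=1) here are illustrative assumptions.

```python
import numpy as np

def kr20(responses):
    """Kuder-Richardson Formula 20 reliability coefficient.

    responses: 2-D array-like, rows = examinees, columns = items,
    entries 1 (correct) or 0 (incorrect).
    """
    r = np.asarray(responses, dtype=float)
    k = r.shape[1]                          # number of items
    p = r.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p                             # proportion incorrect per item
    total_var = r.sum(axis=1).var(ddof=1)   # variance of examinees' total scores
    # KR-20 = (k / (k - 1)) * (1 - sum(p*q) / total score variance)
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

# Example: 4 examinees, 3 items
scores = [[1, 1, 1],
          [1, 0, 1],
          [0, 0, 1],
          [0, 0, 0]]
print(kr20(scores))  # -> 0.9375
```

Values closer to 1 indicate a more internally consistent exam; flagging low-discrimination items and revising them across PDSA cycles is what the paper's framework is designed to drive.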
Ugray, Zsolt and Dunn, Brian K.
"Quality Assurance of Learning Assessments in Large Information Systems and Decision Analysis Courses,"
Journal of Information Systems Education: Vol. 33, Iss. 4, Article 7.
Available at: https://aisel.aisnet.org/jise/vol33/iss4/7