Start Date

11-8-2016

Description

This study examines the effectiveness of using publication in ranked journals to evaluate the quality of scholarly output in the Information Systems field. Counting publications in ranked journals is the traditional method of evaluating scholarly output. It has been criticized for its lack of theoretical basis and its performative effects, but it has never been studied empirically to determine how well it classifies scholarly output by quality. This study fills that gap by testing four published journal lists for their ability to discern the quality of papers. We find that the journal lists substantially misclassify articles by quality and are therefore problematic as evaluative mechanisms for scholarly ability. We argue that other methods, such as evaluation of a scholar’s capital (Cuellar, Takeda, Vidgen, & Truex III, 2016), should be pursued instead.

Can We Trust Journal Rankings to Assess Article Quality?