Communications of the Association for Information Systems

Abstract

For the information systems discipline, it is important to have means of assessing the performance of individual faculty members, groups of researchers, and the journals that publish their work. Such assessments affect the outcomes of university decisions about these individuals, groups, and journals. Various kinds of data can be used in the processes that lead to decisions about performance. In this paper, we consider one type of data that appears to be increasingly adopted, either explicitly or implicitly, as an indicator of performance: the journal impact factor (JIF), which is periodically reported in the Journal Citation Reports (JCR). The allure of JIFs for rating performance is that they come from a third-party source (Thomson Reuters), are systematically determined in a largely transparent fashion, and yield a single number for each journal covered in the JCR. However, behind this allure lie several issues that give us pause when it comes to interpreting or applying JIFs in decisions about performance ratings. These issues appear to be rarely understood or pondered, at least not overtly, by those in the information systems world who adopt JIFs for such decisions. We examine these issues to understand the advisability of employing JIFs to produce performance ratings, the assumptions underlying such use, and its consequences. We conclude that the use of JIFs in university decision making should be undertaken only with great caution, that alternative decision inputs should be considered, and that judging the impact of a specific article by the journal in which it appears is questionable.
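
For reference, the single number at issue is, in its standard JCR form, the two-year impact factor. The abstract does not reproduce the formula; the following is a minimal sketch of that standard definition, using notation (\(C\), \(N\)) that is ours rather than the article's:

\[
\mathrm{JIF}_{Y} \;=\; \frac{C_{Y}(Y-1) + C_{Y}(Y-2)}{N_{Y-1} + N_{Y-2}}
\]

where \(C_{Y}(y)\) is the number of citations received in year \(Y\) by items the journal published in year \(y\), and \(N_{y}\) is the number of citable items (e.g., articles and reviews) the journal published in year \(y\). For example, a journal's 2009 JIF counts citations made in 2009 to its 2007 and 2008 items, divided by the number of citable items it published in 2007 and 2008. Note that a journal-level ratio of this kind says nothing about the citation count of any particular article, which is central to the paper's caution against judging an article by its outlet.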

DOI

10.17705/1CAIS.02502
