Abstract

The “impact factor” computed and published by ISI has become increasingly prominent as a measure of journal quality and, in turn, of the standing of the researchers who publish in those journals. This paper traces the origins of the impact factor, describes its current uses, and identifies numerous problems associated with it. Among these problems is the fact that the conventional metric counts only the citations an average paper receives within the first two years after its year of publication, a window shorter than the sum of the review cycle time and the publication lead time. As a result, it is largely a matter of chance whether papers citing a given published paper do so within that two-year window. One by-product is that impact factors exhibit highly irregular (i.e., jagged) patterns over time rather than smooth growth curves. The impact factor is also susceptible to “gaming” by journal editors and to the transient boost from a single “blockbuster” paper, which can cause a journal’s impact factor to surge for a short time and then fall dramatically. We predict various statistical anomalies in journal impact factor data and test these predictions against published data.
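To make the two-year window concrete, the conventional impact factor of a journal for year $Y$ is typically defined as

$$ \mathrm{IF}_Y \;=\; \frac{C_Y(Y-1) + C_Y(Y-2)}{N_{Y-1} + N_{Y-2}}, $$

where $C_Y(y)$ denotes the citations received in year $Y$ to items the journal published in year $y$, and $N_y$ denotes the number of citable items the journal published in year $y$. Because the numerator counts only citations arriving in year $Y$ to papers at most two years old, any citation delayed by review and production lead times falls entirely outside the window.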
