Journal Impact Factor: Metric or Mirage?

The journal impact factor (JIF), or simply ‘impact factor’, was introduced in 1975 as a metric for ranking scientific journals by the level of interest they attracted from readers. An early application was to help librarians decide which journal subscriptions to purchase for their institutions.

It was designed, therefore, as a market research tool and was never intended as a measure of the quality of journals, nor of the science they contain. After passing through several hands, the Journal Citation Reports (JCR) – the annual listing of journal impact factors – was acquired by Clarivate, which continues to ‘calculate’ and publish this ‘metric’. We have put ‘calculate’ and ‘metric’ in quotation marks because their meanings, when applied to the journal impact factor, are a little unconventional. You see, the journal impact factor was initially calculated using a very straightforward formula:

Impact factor = (citations in year n to papers published in years n-1 and n-2) / (papers published in year n-1 + papers published in year n-2)

In other words, it is (or was) the number of citations that the average paper would garner in the two years following its publication – but the wheels came off! Early on, publishers realized the monetary value of a high impact factor, and so they would ‘appeal’ their rankings, sometimes successfully, if they didn’t like them. It also appears that some journals underwent a mysterious jump in impact factor when they were acquired by the publisher of the JCR. Finally, and as you might have predicted, people, or publishers (like people, but without souls), began to ‘game’ the system: they would pick and choose how to report their citation numbers in order to boost their rankings.1 I even remember receiving a postcard from a publisher, back in the day when there were postcards, encouraging me to cite my own papers in their journal as a way of boosting its impact factor. Many publishers did this quite unashamedly.
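To make the arithmetic of that two-year window concrete, here is a minimal sketch in Python. The journal counts, the citation totals, and the batch of solicited self-citations are all invented for illustration; real calculations also depend on which items Clarivate counts as ‘citable’.

```python
# A minimal sketch of the two-year impact factor arithmetic.
# All numbers below are hypothetical, chosen only to illustrate the formula
# and how solicited self-citations can inflate the result.

def impact_factor(citations_in_year_n: int,
                  papers_year_n_minus_1: int,
                  papers_year_n_minus_2: int) -> float:
    """Citations received in year n to items from the two preceding years,
    divided by the number of items published in those two years."""
    return citations_in_year_n / (papers_year_n_minus_1 + papers_year_n_minus_2)

# Hypothetical journal: 200 citations in year n to the 50 + 60 papers
# it published in years n-1 and n-2.
baseline = impact_factor(200, 50, 60)       # 200 / 110 ≈ 1.82

# The same journal after soliciting 55 extra self-citations
# (the 'cite your own papers in our journal' postcard trick).
inflated = impact_factor(200 + 55, 50, 60)  # 255 / 110 ≈ 2.32

print(f"baseline JIF: {baseline:.2f}")
print(f"inflated JIF: {inflated:.2f}")
```

The point of the sketch is simply that the numerator is easy to pad: a modest number of extra citations, concentrated in the two-year window, moves the headline figure noticeably.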

So, many would say that the journal impact factor is no longer a metric: it is negotiated rather than calculated, and we are no longer sure what it is intended to reflect. Refinements to the calculation have been proposed, but even if these produced metrics with more validity, the problem of misapplication would likely remain. One thing is certain, however: the journal impact factor never was, and never will be, a metric of journal quality or research quality. In fact, a recent study has argued that, depending on the circumstances, the journal impact factor is about as useful as flipping a coin for judging the relative merits of two research papers.2 It is therefore not surprising that the use of the journal impact factor to evaluate research or researchers, to allocate funding, and to guide hiring and promotion practices has been roundly criticized as not just misguided but positively destructive of good research.3

References

  1. Liu XL, Gai SS, Zhou J. Journal impact factor: do the numerator and denominator need correction? PLoS One 2016;11(3):e0151414.
  2. Brito R, Rodriguez-Navarro A. Evaluating research and researchers by the journal impact factor: is it better than coin flipping? J Informetr 2019;13(1):314-324.
  3. Paulus FM, Cruz N, Krach S. The impact factor fallacy. Front Psychol 2018;9:1487.
