Bibliometrics, Scientometrics and Research Valuation

How do you know what constitutes ‘good science’? It’s a tough question, and yet it is one that faces many researchers and institutions. Funders need to make those judgements in order to decide who does and doesn’t get support for their work. Institutions have an obligation to hire and promote on the basis of the quality of someone’s research.

And, of course, individual researchers make these calls too, both when acting as peer reviewers for journals and when making important decisions about their own careers. This is high-stakes stuff. Poorly informed decisions harm individuals, institutions and society as a whole.

It is not surprising, therefore, that some serious attempts have been made to establish standards by which valid and reliable judgements can be made. This is going to be a long journey, and we are only at the start of it, but as is often said about such complex matters, we cannot ‘let the perfect become the enemy of the good.’ We need to work with what we have as we work towards something better.

In this blog, we hope to promote vigorous and scholarly discussion about research, and we hope that much of that discussion will be grounded in robust reasoning and the right amount of objective information, including metrics. I suspect that the proper use of metrics is a topic we will return to often, so I want to propose a kind of ontology that we might refer to. I want to distinguish between bibliometrics, scientometrics and research valuation. These terms are often conflated and, indeed, it would be hard to draw a sharp line between them. It has been pointed out that part of the confusion arises because bibliometrics, scientometrics and the related field of informetrics often share much the same methods and serve a common intent.1 But let’s see if we can make some general distinctions.

The term bibliometrics has several good and largely congruent definitions in the literature, and these definitions broadly agree that bibliometrics are concerned with the quantitative assessment of publications. At its most primitive, bibliometrics might include something like article word counts, but these days in science the term more often refers to measures such as the number of articles published in a journal or the number of citations received by an article. In that sense, we might say that bibliometrics are quite one-dimensional. Bibliometrics do not speak directly to the quality of what is published, although they are sometimes used to infer quality.

Scientometrics are a bit different in that they are often intended for judging the science or the scientist, rather than the publication. A discovery of great scientific merit could appear in a lowly ranked journal and not be widely read, yet still be a great discovery – the bibliometrics and the scientometrics would tell different stories. The inverse is also true: some really bad science can appear in highly rated journals and garner lots of citations. A notorious example is Andrew Wakefield’s fraudulent study of the relationship between the MMR vaccine and autism, which was published in The Lancet. In any event, the ‘metrics’ of scientometrics tend to be multi-dimensional and may be represented mathematically by formulae that combine several bibliometric variables. In this sense, scientometrics can be seen as an attempt to ‘triangulate’ the value of research or researchers, rather than relying on a single one-dimensional variable.
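To make that concrete, consider what is probably the best-known scientometric indicator, Hirsch’s h-index: a researcher has an h-index of h if h of their papers have each been cited at least h times. Here is a minimal sketch of that calculation in Python – the function name and the sample citation counts are mine, purely for illustration:

```python
def h_index(citations):
    """Return the h-index: the largest h such that at least h papers
    have at least h citations each (Hirsch, 2005)."""
    # Sort citation counts in descending order, then find the last
    # position where the count is still >= its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers with these citation counts give an h-index of 3,
# because three of the papers have at least three citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```

Even this simple indicator combines two bibliometric variables – paper counts and citation counts – into a single judgement about a researcher, which is exactly the kind of ‘triangulation’ described above.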

Part of the appeal of metrics is that they are often easy (read ‘cheap’) to obtain and are likely to be relatively free of bias.2 On the other hand, research stakeholders have become adept at gaming the metrics, and one is never quite sure whether one is measuring the right thing. The rubber hits the road when we are called upon to decide whether a researcher, a research team or a project is ‘good.’ Such decisions are usually employed to inform future courses of action, and yet the evaluation of science is really only possible in retrospect. What to do?3

One approach to a more holistic evaluation of science has been the growth of ‘altmetrics.’ While we may question whether the solution to the metrification of the research enterprise is yet more metrification, the popularity of altmetrics is likely only to grow.4 These measures often involve quantifying the social media presence of research, an admirable acknowledgement of the importance of the social context of science. Other measures that might also fall under the moniker of ‘altmetrics’ include, for example, scoring a research institution against the United Nations’ Sustainable Development Goals (SDGs).
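To illustrate the kind of quantification involved, here is a toy sketch, loosely in the spirit of composite ‘attention’ scores, that folds mention counts from several sources into a single number. The sources and weights below are invented for illustration only; real altmetrics providers use their own (often proprietary) weighting schemes:

```python
# Toy altmetric-style score: a weighted sum of mention counts.
# The sources and weights here are invented for illustration only.
WEIGHTS = {"news": 8.0, "blog": 5.0, "tweet": 0.25, "policy": 3.0}

def attention_score(mentions):
    """Combine per-source mention counts into a single composite score."""
    return sum(WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

# An article with 2 news stories, 1 blog post and 40 tweets scores
# 2*8.0 + 1*5.0 + 40*0.25 = 31.0
print(attention_score({"news": 2, "blog": 1, "tweet": 40}))  # -> 31.0
```

Of course, the weights encode a value judgement about which kinds of attention matter most, which is precisely why such scores deserve the same scrutiny as the citation-based metrics they supplement.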

It should be apparent that we are a long way from a consensus on how to define good science, but it is heartening that the question is out in the open and subject to much sensible exploration.

References:

  1. Yang S, Yuan Q. Are scientometrics, informetrics, and bibliometrics different? Proceedings of the 16th International Conference on Scientometrics & Informetrics.
  2. Mryglod O, Kenna R, Holovatch Y, Berche B. Absolute and specific measures of research group excellence. Scientometrics 2013;95:115-127.
  3. Milne AA. Winnie-the-Pooh. London: Methuen & Co Ltd; 1926.
  4. Peters I, Kraker P, Lex E, Gumpenberger C, Gorraiz J. Research data explored: an extended analysis of citations and altmetrics. Scientometrics 2016;107:723-744.
