Few metrics on research funding organizations

We have talked about the overabundance of author-level metrics in research. It is a dynamic and not very pretty field right now, but the science of science is evolving quickly, and consensus is developing around the better (and not-so-good) ways to evaluate researchers.

Let’s, for a moment, turn our attention to the other end of the food chain. How much information is there about the performance of RFOs – research funding organizations? Are they delivering bang for the buck? As more funding data appear online, and as information scientists get better at finding and deciphering program data, it seems likely that there is substantial room for improvement.

Of course, as scientists (and thus bruised souls who have all been rejected by research funding organizations), we know that a major problem is inadequate funding, especially for ‘my’ research. No doubt about it, biomedical research is a poor competitor for government support and is often a victim of political vagaries. On balance, what we see across developed nations is that fewer researchers are successful in obtaining government funding and that the age at first grant is increasing. For various NIH funding opportunities (rather an ironic name), the hit rate these days seems to be in the low twenties, percentage-wise.(1) However, it is not a level playing field. Historically under-represented demographics are scarcely making any gains, and in some instances their representation is worsening.(2) Furthermore, a kind of social Darwinism is at work, whereby those able to play the long game are more likely to win.(3)

That means a lot is riding on the decisions of the review groups who determine who gets funded and who doesn’t. Well, thank goodness that works well – not! It appears that reviewers of NIH grant applications struggle to reach agreement.(4) Furthermore, reviewers’ rankings of proposals seem to be relatively poor predictors of future performance.(5) More precisely, level of funding is a poor predictor of future levels of publication. At this point, it is not clear how we can improve peer review. We have hypotheses about how peer review should be conducted and what knowledge and skills a peer reviewer ought to have. However, it is still the norm for reviewers to be untrained, and, when they are trained, it is not clear that the training ‘takes.’(6)

A bright spot in all of this is that mentorship programs do increase grant success for junior researchers.(7, 8) Nonetheless, a study of NSERC some years ago concluded that it would be no more costly – and would be more efficient – to skip the review process and simply give every qualified applicant a baseline grant, which the authors estimated at approximately $30,000 CAD.(9) Needless to say, this created some debate and even criticism, prompting the authors to re-evaluate their data and revise their estimated break-even point upwards to about $40,000 CAD! Equally concerning is an increasing concentration of funding in a relatively small number of research-intensive universities, which could have dire consequences for small universities and, of course, colleges.(10)


1. Brown J. National Institutes of Health Support for Clinical Emergency Care Research, 2015 to 2018. Ann Emerg Med. 2021;77(1).

2. Swenor BK, Munoz B, Meeks LM. A decade of decline: Grant funding for researchers with disabilities 2008 to 2018. PLoS One. 2020;15(3):e0228686.

3. Ascoli GA. Biomedical research funding: when the game gets tough, winners start to play. Bioessays. 2007;29(9):933-6.

4. Pier EL, Brauer M, Filut A, Kaatz A, Raclaw J, Nathan MJ, et al. Low agreement among reviewers evaluating the same NIH grant applications. PNAS. 2018;115(12):2952-7.

5. Gyorffy B, Herman P, Szabo I. Research funding: past performance is a stronger predictor of future scientific output than reviewer scores. Journal of Informetrics. 2020;14.

6. Steiner Davis MLE, Conner TR, Miller-Bains K, Shapard L. What makes an effective grants peer reviewer? An exploratory study of the necessary skills. PLoS One. 2020;15(5).

7. Weber-Main AM, Thomas-Pollei KA, Grabowski J, Steer CJ, Thuras PD, Kushner MG. The Proposal Preparation Program: a group mentoring, faculty development model to facilitate the submission and funding of NIH grant applications. Acad Med. 2021. Epub ahead of print. DOI: 10.1097/ACM.0000000000004359

8. Freel SA, Smith PC, Burns EN, Downer JB, Brown AJ, Dewhirst MW. Multidisciplinary mentoring programs to enhance junior faculty research grant success. Acad Med. 2017;92(10):1410-5.

9. Gordon R, Poulin BJ. Cost of the NSERC Science Grant Peer Review System exceeds the cost of giving every qualified researcher a baseline grant. Account Res. 2009;16(1):13-40.

10. Murray DL, Morris D, Lavoie C, Leavitt PR, MacIsaac H, Masson MEJ, et al. Bias in research grant evaluation has dire consequences for small universities. PLoS One. 2016;11(6).
