Why is John Ioannidis such a Bad-Ass?

When John Ioannidis’ paper ‘Why most published research findings are false’ appeared in 2005, it stirred up quite a bit of conversation, but surprisingly little dissent.(1) Using straightforward mathematical analyses and some common-sense arguments, he pointed out several significant problems across the whole biomedical research enterprise, including common errors in hypothesis generation and research design. In the intervening years, some progress has been made in reducing the risk of bias in certain research designs, notably RCTs. However, there is still work to be done, with some particular challenges for research in the realm of chiropractic.

An interesting and perhaps counter-intuitive proposition that Dr. Ioannidis advanced was that having too many research teams chasing the same question, particularly with small numbers of subjects/specimens, increased the likelihood of false findings. The argument he made was that just as testing too many hypotheses increases the likelihood of a false positive finding within a single study, so does having too many teams chasing the same hypothesis. This is even more likely when small effect sizes are reported, regardless of statistical significance. If one then looks at the history, even over the past decade, of studies of spinal manipulative therapy (SMT) for biomechanical disorders of the spine, this is what one sees – an overabundance of under-powered or barely adequately powered studies of SMT versus… well, fill in the blank – mobilization, exercise, what-have-you. Ioannidis also predicted, as we have seen, alternating positive and negative findings in subsequent studies, confusing the issues and dividing the field rather than shedding light on the topic.
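The multiple-teams argument can be illustrated with a little arithmetic (a minimal sketch of the familiar multiple-testing inflation, not Ioannidis’ full positive-predictive-value model; the function name here is our own):

```python
# If k independent teams (or k independent tests) each study a truly null
# hypothesis with a conventional 5% false-positive rate (alpha = 0.05),
# the chance that at least one of them reports a "positive" finding grows
# quickly with k.

def p_at_least_one_false_positive(k: int, alpha: float = 0.05) -> float:
    """Probability that at least one of k independent tests is a false positive."""
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} teams: {p_at_least_one_false_positive(k):.2f}")
# With 10 teams the probability already exceeds 0.40; with 20, roughly 0.64.
```

In other words, even if every individual study is conducted honestly at the 5% level, a crowded field almost guarantees that someone, somewhere, will publish a spurious positive result.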

Ioannidis subsequently went on to argue that there is no reason to think that things will get any better any time soon (hard to explain why he always looks happy in his photographs).(2) Much of his argument comes down to sociological and psychological phenomena, including the overvaluation of publication counts and the massive undervaluation of replication research.

Ioannidis has particular concerns about clinical research, as it is this domain which has the most immediate effect on real living people.(3) In addition to earlier concerns about design and hypothesis generation, he remarked upon the amount of effort (and money) devoted to the study of relatively trivial disorders or ‘medicalized’ normal conditions such as halitosis (bad breath) and declining sexual function with age. There are echoes here of a criticism often leveled at chiropractors – that they tend to treat the ‘worried well’ rather than the truly ill. This is certainly unfair when we look at the balance of patients seen in community-based clinics. Nonetheless, a glance at the contents page of any chiropractic journal may cause one to ask whether a particular piece of research actually needed to be done – are any patients likely to enjoy a clinically significant benefit from implementation of the findings? And it’s not just chiropractic – Ioannidis concludes that, across the board, only a minority of clinical trials address real patient problems. He states, “Overall, not only are most research findings false, but, furthermore, most of the true findings are not useful.”

So what are the solutions? Ioannidis suggests less research, but of higher quality, as part of the answer. He remarks upon, as we have seen in chiropractic, the muddying of the water by incentivizing (coercing) students and residents to publish something – anything! People who aren’t committed to a scientific question should not be writing about it, and institutions should stop pushing them to do so.

If we really want to improve the research enterprise, however, Ioannidis argues that we need to conduct more research on research to understand how and why it is done well or not so well.(4) This requires questioning, and looking for ways to improve, systems that were previously treated as unquestionable – peer review, competitive grant funding, and so on. Meta-research also brings together technologies and disciplines in novel combinations, increasing the likelihood of new perspectives on old practices.

The growth of our own collaboration, the Global Chiropractic Research Enterprise Initiative, is testament to our own discipline’s continuing curiosity, and also recognition that we want to and can improve the way in which we serve science.

  1. Ioannidis JPA. Why most published research findings are false. PLoS Medicine. 2005;2(8):e124.
  2. Ioannidis JPA. Why science is not necessarily self-correcting. Perspectives on Psychological Science. 2012;7(6):645-654.
  3. Ioannidis JPA. Why most clinical research is not useful. PLoS Medicine. 2016;13(6):e1002049.
  4. Ioannidis JPA. Meta-research: why research on research matters. PLoS Biology. 2018;16(3):e2005468.
