Drug Baron

June 24, 2013
by admin

DrugBaron was intrigued to see a paper published in PLOS ONE on June 20th entitled “Why selective publication of statistically significant results can be effective”.  After all, selective publication of positive results is one of the key phenomena that underpin the replicability crisis that afflicts science.  Could it really be that positive publication bias is a good thing?

The paper is based on a simulation, and nicely demonstrates that selectively publishing positive results increases the risk of poorly replicable findings (the so-called Proteus Phenomenon).  That much is uncontentious.
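
The mechanism is easy to see in a back-of-the-envelope simulation.  The sketch below is a toy model with made-up parameters (the proportion of real effects, the effect size, the sample sizes are all arbitrary illustrative choices, not values from de Winter and Happee’s model): it tests a mix of real and null hypotheses, “publishes” only the statistically significant results, and then attempts an independent replication of each published finding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters: arbitrary illustrative choices, not values taken
# from de Winter and Happee's model.
N_HYPOTHESES = 100_000    # candidate effects investigated
P_REAL = 0.1              # fraction with a genuine underlying effect
TRUE_EFFECT = 0.3         # standardized effect size when the effect is real
N_PER_STUDY = 50          # sample size of each study
Z_CRIT = 1.96             # two-sided 5% significance threshold

def run_studies(is_real):
    """One study per hypothesis; returns the z-statistic of each."""
    mean_effect = np.where(is_real, TRUE_EFFECT, 0.0)
    estimate = rng.normal(mean_effect, 1.0 / np.sqrt(N_PER_STUDY))
    return estimate * np.sqrt(N_PER_STUDY)

is_real = rng.random(N_HYPOTHESES) < P_REAL

# Initial studies: only the 'significant' ones get published.
z_first = run_studies(is_real)
published = np.abs(z_first) > Z_CRIT

# Independent replication attempts, counted as a success only if
# significant in the same direction as the original finding.
z_second = run_studies(is_real)
replicated = (np.abs(z_second) > Z_CRIT) & (np.sign(z_second) == np.sign(z_first))

print(f"published findings:         {published.sum()}")
print(f"  backed by a real effect:  {is_real[published].mean():.1%}")
print(f"  replicated successfully:  {replicated[published].mean():.1%}")
```

With these made-up numbers, only around half of the published findings rest on a real effect, and roughly a third survive a same-sized replication attempt: the published record looks far shakier than the underlying science, purely because the non-significant results were filtered out before publication.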

But the authors go on to show that, over time, the scientific record is more “content-rich” with a positive publication bias.  In this context, content-rich means that fewer published studies are required to accurately estimate the “real” effect than would be the case if every study were published.  In other words, a smaller scientific record is required to “know the truth” with positive publication bias than without.

This finding, it seems, justified their grand title.  “Effective”, in the authors’ usage, presumes that keeping the scientific record small and content-rich is the dominant goal for society.

That depends on your viewpoint.

For academic science, establishing truth eventually is all that matters.  Speed is not critical, and even if the first few papers on a topic turn out to be poorly replicable, as long as the scientific record as a whole eventually yields the truth in some kind of grand meta-analysis then all is well with the world.

And certainly historically, when the only way to find anything was to stand for hours in dusty libraries consulting large, musty leather-bound tomes, having a small content-rich scientific record was important (and indeed inevitable, given the high marginal cost of publishing in a pre-internet age).

But the world has changed.  Today, content can be filtered easily, millions of records searched, and related content grouped in a myriad of different ways.  Small and content-rich doesn’t carry the currency it did even a decade ago.

The relative harm of the Proteus Phenomenon also depends on the use to which the data is being put.  For the grand academic goal of establishing truth eventually, a little early “fluctuation” in the conclusion doesn’t matter.  But for those scouring the frontiers of science for the next big thing (which VCs and pharma companies do all the time) the lack of replicability of the latest findings is VERY harmful.

Almost every exciting new finding that you pick up and play with turns out to be less bright and shiny than it initially appeared to be.  For many years, keen not to miss out on the latest, greatest science, commercial players (whether large pharma, smaller biotechs or VCs) would grab new discoveries and build projects or even whole companies around them.  But too many times the key observations proved “fragile”, and much money was expended attempting to repeat the unreproducible.

Positive publication bias and the Proteus Phenomenon are largely, but not solely, to blame for these problems.  Forewarned by even one negative replication study, these companies would have been much quicker to abandon their efforts when the initial studies failed to reproduce the published findings, saving dollars for the next attempt on something entirely different.

Understanding the degree of unreproducibility that accompanies most “first-in-class” observations would also have elevated reproducing the original observation to the first priority of every new project.  Until recently, it was common practice to start building the next storey from day one, rather than strengthening the foundations.

In this world, the world in which DrugBaron operates, the benefit of keeping the scientific record compact and content-rich is far outweighed by the fragility of the early conclusions.  Positive publication bias is NOT effective, and papers like that of de Winter and Happee risk misleading the unwary.

Perhaps a better title for their paper would be “Why simulation studies can be misleading unless you are very clear about the criteria that distinguish a good outcome from a bad one”.