Drug Baron

“Idea Bubbles”: The dangers of big theories in low-validity environments


This week, Pfizer and J&J finally discontinued development of intravenous bapineuzumab for Alzheimer’s Disease (AD) after a second spectacular failure in Phase 3.  That they discontinued the programme surprised almost no one.  Everyone, it seems, is smart after the event, but the potential for anti-amyloid therapies was not always viewed with such negativity.

Once upon a time, almost the entire Alzheimer’s research community believed that the deposition of amyloid into plaques was the causative event underpinning AD.  Dissenters were viewed with the same skepticism as ‘flat-earthers’.  All the evidence stacked up – if you could just reduce amyloid deposition you would prevent or even cure the disease.

Unfortunately, obtaining proof of this hypothesis required vast Phase 3 studies – early clinical studies in AD are a classic example of a ‘low validity environment’ – in other words, early (cheap and quick) results are notoriously poor predictors of late (slow and expensive) studies such as the Phase 3 trials now reporting.  As a result, it was the strength of the amyloid hypothesis (and the weight of its large number of acolytes) that drove global pharma execs to back these trials – despite Phase 2 data that were at best equivocal.

And because the amyloid hypothesis is wrong, and the Phase 2 studies were poorly predictive, the whole house came crashing down spectacularly, with billions of dollars wasted.

If the theoretical underpinning for new trials in these indications is always at risk of “inflation”, how can we play in these areas and still stay “safe”?

There are parallels with another unfolding disaster – the CETP inhibitors.  Here, in another low validity environment, the HDL hypothesis had gained almost religious support (just like the amyloid hypothesis), and again the flawed hypothesis will result in a dramatic and expensive failure in Phase 3.  The only difference is that the LDL-cholesterol-lowering effects of the CETP inhibitors may salvage some value even though the HDL hypothesis was completely off-target.

These late-stage failures in indications that demand vast Phase 3 studies are killing the pharmaceutical industry.  What are their origins?  And how do we avoid them in the future?  DrugBaron sees it as more than coincidence that powerful theories grow up specifically in these low validity environments.  The lesson for the pharma industry is to stay pragmatic and recognize that ideas, as well as economic assets, can show bubble behaviour – and the crash that follows an “ideas bubble” is every bit as painful as the conventional financial post-bubble crash.

“Drug development is HARD,” tweeted Michael Gilman, himself a successful drug developer with Stromedix and now Biogen, in response to criticism of Pfizer for pursuing bapineuzumab into Phase 3.  “Trials fail for many reasons. Nothing ventured, nothing gained.”

He is right, of course.  But drug development is only an economically viable activity if those failures occur predominantly at the early stages of the process, when the committed dollars are manageably small.  Failures in expensive Phase 3 programmes demolish returns on R&D investment, cost chief executives their jobs, and, repeated too often, can bring down even a global pharmaceutical company.  Understanding the factors that underpin such failures is therefore of critical importance to the industry.

Bapineuzumab failed because of the underlying biology.  The antibody itself did exactly what it says on the tin – it effectively clears amyloid.  The problem is that clearing amyloid does not prevent the disease worsening.  The Phase 3 programme with bapineuzumab was, in effect, a grand test of the hypothesis that amyloid deposition is a causative pathogenic component in AD.  Whether it was sensible to run the trials at all, then, comes down to a judgment of how likely that hypothesis was to be true.

An “ideas bubble” occurs when, over a long period of time, positive social feedback disconnects the perceived validity from the real underlying validity – in the same way price and value dissociate in a stock market bubble.

It is the second time this year that a drug once touted as a super-blockbuster to rival Lipitor™ in its heyday has disappointed because a biological hypothesis was just plain wrong.  DrugBaron has already examined the failure of CETP inhibitors to deliver the expected benefit in reducing heart attacks.  These molecules dramatically raise HDL, the so-called “good cholesterol” – but the data released so far suggest that this does not translate into anything like the expected clinical benefit.  The problem, again, is a flaw in the underlying biological hypothesis that was only properly tested with a massive Phase 3 programme.

The similarities between these stories tell us two things: first, that some indications (like AD and heart disease) are difficult to de-risk early; and second, that a hypothesis can become a doctrine to such an extent that people are in danger of forgetting that it is still a hypothesis.

Unfortunately, these two factors interact.  If clinical outcomes can readily be predicted early in development, for example using biomarkers, then an entrenched hypothesis about biological mechanism can be tested early, and relatively cheaply.  This is exactly what happened with GSK’s anti-IL5 antibody mepolizumab.  The entrenched hypothesis maintained that lung eosinophils were a pathological component of allergic asthma – and mepolizumab dramatically reduces lung eosinophilia.  But clinical studies showed that mepolizumab had no clinical benefit in the vast majority of asthma patients, and the development focus was shifted to hypereosinophilic syndrome.  The entrenched view of the biology of asthma was wrong, but in this indication at least, testing that hypothesis in a clinical experiment was relatively low cost.

In the same way, if the biological doctrine has already been proven clinically, then it’s safe to proceed in an indication where early de-risking is almost impossible.  The example here is the class of promising antibodies against PCSK9, a protein that targets the LDL receptor for degradation.  The lead compound, REGN727, developed by Regeneron and Sanofi, entered a vast Phase 3 programme earlier this year.  The risk of failure here is much lower than for the CETP inhibitor programme because the clinical relevance of lowering LDL-cholesterol is already established beyond doubt.

But when an entrenched, but unproven, hypothesis and a challenging indication come into conjunction then the risk of expensive and damaging failure sky-rockets – as the bapineuzumab and CETP inhibitor stories amply demonstrate.

Worse still, an unproven hypothesis is much more likely to become entrenched in the ‘low validity environment’ around these difficult indications.

A “low validity environment” is one where the predictive value of the known facts is relatively weak.

There can be little doubt that AD and coronary heart disease fall into this category, simply because neither animal models nor early-stage clinical studies reliably predict – or perhaps predict at all – the outcome in late-stage trials with the hard clinical end-points required for registration.
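
To see what “low validity” means in practice, consider a back-of-the-envelope Bayesian sketch (all numbers below are illustrative assumptions, not data from any trial): how much does a positive Phase 2 result raise the probability of Phase 3 success when the early endpoint is only weakly predictive?

```python
# A minimal sketch, assuming illustrative numbers (not trial data), of how
# little a "positive" early study moves the odds in a low validity environment.

def posterior_success(prior, sensitivity, false_positive_rate):
    """P(Phase 3 success | positive Phase 2), by Bayes' rule."""
    true_pos = sensitivity * prior
    false_pos = false_positive_rate * (1 - prior)
    return true_pos / (true_pos + false_pos)

prior = 0.10  # assumed prior probability that the mechanism truly works

# High-validity indication: early endpoints track late outcomes well
print(posterior_success(prior, sensitivity=0.80, false_positive_rate=0.10))  # ~0.47

# Low validity environment: early endpoints barely better than a coin flip
print(posterior_success(prior, sensitivity=0.55, false_positive_rate=0.45))  # ~0.12
```

On these assumptions, a positive Phase 2 in a low validity environment leaves the real probability of Phase 3 success close to the prior – which is precisely why the perceived strength of the hypothesis, rather than the data, ends up driving the go/no-go decision.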

As a result, very few mechanistic hypotheses are ever fortunate enough to be tested in these indications.  Since vast Phase 3 clinical studies in such indications are the sole preserve of the largest pharma companies, they are well beyond the reach of academics supported by conventional funding streams.  This creates a knowledge vacuum – and in the land of the ignorant, hypothesis is king.

Ideas can even be magnified in the vortex of a positive feedback loop – what DrugBaron calls an “ideas bubble”.  Positive publication bias sees experiments supporting a hypothesis published in preference to experiments challenging it (and this in turn makes it more likely that other scientists, mindful of their publication record, will design experiments supporting a widely accepted hypothesis rather than taking the more logical approach of attempting to disprove it).
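
A toy simulation illustrates how strong this feedback can be.  Assume (purely for illustration – the publication rates are invented) that the hypothesis is actually false, so experiments support or contradict it at random, but that supportive results are published three times as readily:

```python
import random

# Toy "ideas bubble" model: the hypothesis is FALSE, so each experiment is a
# coin flip, but supportive results reach the literature far more often.
# The publication rates below are invented for illustration.

random.seed(1)
P_PUBLISH_SUPPORT = 0.9  # assumed publication rate for supportive results
P_PUBLISH_CONTRA = 0.3   # assumed publication rate for contradictory results

support, contra = 0, 0
for _ in range(200):  # 200 experiments on a false hypothesis
    supportive = random.random() < 0.5
    if supportive and random.random() < P_PUBLISH_SUPPORT:
        support += 1
    elif not supportive and random.random() < P_PUBLISH_CONTRA:
        contra += 1

print(f"Published results supporting the (false) hypothesis: "
      f"{support / (support + contra):.0%}")  # ~75%
```

Even before the second-order effect – scientists preferentially designing confirmatory experiments in the first place – the published record already reads roughly three-to-one in favour of a hypothesis that is flat wrong.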

Before long, the initial hypothesis has gained enough gravity to attract a whole constellation of circumstantial evidence into its orbit.

And being a low validity environment, the definitive test remains out of reach.  This gives the “ideas bubble” a long time to mature and be consolidated in people’s minds.  Assumptions made early in the story have time to be forgotten, and “helpful” jargon, like “good cholesterol”, embodies such certainty that the dividing line between hypothesis and fact becomes hazier and hazier.

“Ideas bubbles” are as dangerous as pricing bubbles in other assets, from tulip bulbs in 1637 Amsterdam to dot com stocks in 2000.   In both cases, positive social feedback results in perceived value exceeding real value.  And in both cases, this disconnect is followed by a sharp re-alignment.   In the case of ideas, the bubble leads to the perceived likelihood of a hypothesis being correct far exceeding the real likelihood.  So when the definitive test is performed (the costly Phase 3 programme) the result is almost always failure.

This paints a pretty depressing picture.  The standard of care is much less effective in these “low validity” indications – for the obvious reason that innovation there is harder: Phase 3 studies are generally larger and more expensive, and have to be built on less certain theoretical foundations.  So the unmet medical need is huge.  Simply avoiding these areas is one solution for the pharma companies, but that leaves patients in need and in any case misses out on the vast profits to be made from successful innovation in these areas.

We must not allow these high-profile failures to drive further ‘asset favoritism’, where all the R&D dollars chase the same, easier, indications.

But if the theoretical underpinning for new trials in these indications is always at risk of “inflation”, how can we play in these areas and still stay “safe”?

First and foremost: do not let a hypothesis, however strongly held, trump data.

When the most advanced CETP inhibitor, torcetrapib, was tested in combination with atorvastatin and not only failed to improve matters but actually caused more events, shocked industry veterans scrabbled around looking for an explanation.  The right answer, of course, was staring them in the face: the entrenched HDL hypothesis was wrong.  But they never even considered that option – instead, they focused on a small hypertensive effect of the torcetrapib molecule and convinced themselves that this “must” have been responsible for the shocking failure.

It was never a plausible explanation.  The gradient linking hypertension with heart attacks was too shallow to quantitatively convert a detrimental result with torcetrapib into a strongly positive one for a CETP inhibitor without this liability.  For sure, it may explain why taking torcetrapib was worse than taking nothing, but not the absence of a huge benefit expected to accrue from doubling HDL-cholesterol.
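
A rough calculation shows why.  Using ballpark figures from the epidemiology literature (each ~20 mmHg of systolic blood pressure roughly doubles coronary risk; each 1 mg/dL of HDL is associated with roughly 2% lower risk – both approximations used here only for scale, not as trial results), the pressor effect is far too small to cancel the benefit the HDL hypothesis predicted:

```python
# Back-of-envelope sketch: could torcetrapib's blood pressure liability
# plausibly cancel the benefit predicted by the HDL hypothesis?
# All figures are approximations for illustration, not trial results.

BP_DOUBLING_MMHG = 20.0   # ~20 mmHg systolic roughly doubles coronary risk
HDL_RISK_PER_MGDL = 0.02  # ~2% lower risk per mg/dL of HDL (conservative end)

bp_rise = 5.0    # torcetrapib raised systolic BP by roughly this much (mmHg)
hdl_rise = 30.0  # and raised HDL-cholesterol by tens of mg/dL (illustrative)

rr_harm = 2 ** (bp_rise / BP_DOUBLING_MMHG)       # ~1.19 from blood pressure
rr_benefit = (1 - HDL_RISK_PER_MGDL) ** hdl_rise  # ~0.55 from HDL, if the hypothesis held

print(f"Harm from BP rise:  RR ~ {rr_harm:.2f}")
print(f"Benefit from HDL:   RR ~ {rr_benefit:.2f}")
print(f"Net, if both held:  RR ~ {rr_harm * rr_benefit:.2f}")  # ~0.65: still strongly positive
```

On these (admittedly crude) numbers, the hypertensive effect could tip a neutral drug into modest net harm, but if doubling HDL delivered the benefit the epidemiology implied, torcetrapib should still have cut events by roughly a third despite the blood pressure liability.  It didn’t – which points at the hypothesis, not the molecule.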

The biggest mistake, then, was not the brave decision to try a new mechanistic approach in this area of high medical need – as Michael Gilman rightly says, “Nothing ventured, nothing gained” – but to allow the entrenched hypothesis to persist when the data indicated otherwise.  Early indications from the ongoing Phase 3 studies with anacetrapib, a second-generation CETP inhibitor that lacks the hypertensive liability, suggest it will disappoint too.  Running those studies was an avoidable mistake.

The same principle led to the initiation of the Phase 3 trials with bapineuzumab – analyzed in isolation from the entrenched hypothesis, the Phase 2 data were clearly negative.  But with spectacles tinted a deep shade of rose by the combination of the widely held amyloid hypothesis and the size of the AD market, execs ploughed ahead regardless.  The apparent strength of the hypothesis beat the real strength of the data.

Already, in the early aftermath of the bapineuzumab results, there are signs of the same mistakes being made all over again.  Experts suggest that bapineuzumab failed because the wrong patients were recruited.  Maybe once the plaques are established, removing amyloid is ineffective?  Maybe if you treated patients at a much earlier stage of the disease you would see a positive effect?

Instead of addressing the real problem – that the entrenched hypothesis was wrong – they are ignoring the elephant in the room and looking for the mouse.

From a theoretical standpoint they could be right, of course.  But the data from the existing trials strongly suggest otherwise.  Despite inclusion criteria, the patients enrolled into a typical Phase 3 trial are a diverse group – had bapineuzumab had a powerful effect on a sub-group, this would have been obvious from the data.  But the principal investigator on the 301 trial was quick to emphasize to journalists that this was not a failure of trial design – there was no evidence whatsoever for any efficacy of the intervention.
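
A quick simulation supports that intuition (the effect size, subgroup fraction and sample size are assumptions chosen for illustration): even if only a quarter of patients responded, a genuinely powerful effect would still shift the overall result far beyond noise.

```python
import random
import statistics

# A minimal sketch: would a strong effect confined to a 25% subgroup hide
# inside an overall "null" Phase 3 result?  Parameters are assumptions.

random.seed(7)
N = 1000                  # patients per arm (illustrative)
SUBGROUP_FRACTION = 0.25  # assumed fraction of true responders
EFFECT = 1.0              # assumed large effect in responders (in SD units)

placebo = [random.gauss(0, 1) for _ in range(N)]
treated = [random.gauss(EFFECT if random.random() < SUBGROUP_FRACTION else 0, 1)
           for _ in range(N)]

diff = statistics.mean(treated) - statistics.mean(placebo)
se = (statistics.pvariance(treated) / N + statistics.pvariance(placebo) / N) ** 0.5
print(f"Overall effect: {diff:.2f} SD units (z ~ {diff / se:.1f})")
# A strong effect in 25% of patients shifts the overall mean by ~0.25 SD,
# a z-score far above conventional significance at this sample size.
```

A large responder subgroup leaves a visible fingerprint in the overall data; the complete absence of efficacy is itself evidence against the “wrong patients” rescue narrative.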

Sensibly, Pfizer and J&J have discontinued bapineuzumab development completely.  But will others be so sensible?  The academic and pharma communities have a lot (of reputation as well as money) invested in the amyloid hypothesis.  Like the HDL hypothesis, one imagines it will not die easily.

The second safety tip is to listen carefully to the dissenters.  The validity of a hypothesis should be assessed by the rational arguments of the proponents and dissenters, not by the numbers of people in each camp.  Christopher Byrne, now Professor of Endocrinology and Metabolism at Southampton University, explained the limitations of the epidemiology data used as the foundation of the HDL hypothesis to DrugBaron more than 15 years ago.  Had the senior pharma executives heard, and understood, those arguments it is hard to believe that the CETP programme would have got as far as it did.  Certainly, they would have responded to the torcetrapib failure very differently.

Similarly, Professor Bob Mahley, whose team at the Gladstone Institute at UCSF discovered an alternative apoE4 pathway involved in the development of AD, spelled out for DrugBaron the weaknesses of the amyloid hypothesis before news of the bapineuzumab trials began to emerge.

Do not let a hypothesis, however strongly held, trump data.

Of course, there are always dissenters – and many of those have their own agenda to pursue.  The key, though, is to take the time to listen carefully to their arguments.  The more entrenched a hypothesis has become, and the bigger the risk that an “ideas bubble” has formed, the more carefully you should listen to at least the most coherent dissenting voices.  And when the data also point in a negative direction, listen even more carefully.

Drug development, like polar exploration, IS hard.  Despite that, we need brave – even foolhardy – people to strike off into the most dangerous unexplored lands.  But as for a polar explorer, preparation is key.  Give yourself the best chance of success by listening carefully to all those around you, not just those who agree with each other and with you.  And above all, pay attention to the weather forecast: if the data show storms ahead, it is better to pull out and fight another day than perish in a valiant, but ultimately doomed, endeavor.


  • http://bir-llc.com Kevin McNamara

    Fantastic article.

    -Kevin

  • Alzheimer Researcher

    Great, great article. There are many articles about why it was a mistake to test bapineuzumab (Herper, Forbes), but they are not quite right and have no scientific basis in their arguments. Yours was fantastic.

    There was a statement that is not correct. “Sensibly, Pfizer and J&J have discontinued bapineuzumab development completely. But will others be so sensible?”

    Pfizer and JnJ have not discontinued bapineuzumab. You have to read the fine print in their statements. They have discontinued bapineuzumab administered intravenously for mild to moderate AD.

    They are still continuing a study with bapineuzumab administered subcutaneously in mild to moderate AD, and are plowing ahead with a very large trial with bapineuzumab SUBQ in prodromal AD. In addition, they are continuing their very large Ph 2 program (9 ongoing trials) with ACC-001, the second generation of AN1792. And they are continuing a 3rd program, AAB-003, that can be dosed higher. All of these target amyloid.

    You could say that the amyloid hypothesis was partially discredited by the AN1792 trials, where it also showed no clinical benefit even though, upon autopsy, the patients’ amyloid had been removed.

    • admin

      Thanks for the positive feedback – and also for pointing out the error.

      You are completely correct – the continuing programmes suggest that Pfizer and JnJ are as guilty as the rest of the field, who have reacted to the Ph3 failure of iv bapi by suggesting that the problem lies in not treating the disease early enough. It is quite clear from the continuing studies in ‘pre-AD’ that they continue to believe in the hypothesis despite the latest dose of negative data.

      If anything, that simply makes the main point of the article even more powerful. As with the CETP inhibitors, just how much failure will it take before the data demolish the entrenched hypothesis? Maybe it’s time to sell your Pfizer shares!

  • http://networkpharmacology.com Network Pharmacology Blog

    This is an excellent article. Ultimately it shows that in drug development, science drives the commerce, not the reverse. Both the examples discussed are clearly complex diseases, as are arguably most of the remaining areas of unmet medical need.

    Using biomarkers at an earlier stage to objectively confirm or deny a hypothesis makes rational sense, but with complex disease, identifying properly representative biomarkers is not a trivial challenge.

    As you state, the issue ultimately boils down to the underlying biology and how it actually functions – or rather malfunctions – in the case of complex disease. Ultimately, the issue we see is that the traditional drug discovery and molecular biology approach to forming hypotheses is too simplistic. Biological function is increasingly understood to manifest as the output of complicated biological systems within cells, systems predominantly made up of proteins. The nuts and bolts of this engineering is what needs to be addressed head on to tackle complex disease.

    We see fresh hope in the form of network pharmacology providing a new basis for informed drug design, with suitable biomarkers, as you so correctly highlight, being necessary to avoid money and time wasted on doomed large-scale trials.

    How close to the cliff big pharma will have to go before senior execs react to the writing on the wall is hard to judge. One issue about these larger Phase III trials not discussed is the market reaction to doing the opposite and not proceeding. Unless they have alternatives, they may look as if they are just running down the clock on their existing franchises and have thrown in the towel on these hard diseases.

    The inconvenience being that easier diseases with genuine market opportunity are in short supply. Whether CNS, cancer, cardiovascular, etc., the bar is high for the next generation of efficacy. Arguably, as they abandon research and adopt a strategic model of in-licensing drugs with the requisite data, this burden falls on the biotechnology community.

    Irrespective of who innovates, the scientific truth of the underlying biology will not change. The rewards are enormous, but an entrenched hypothesis without proper clinical validation, as you correctly highlight, wastes time and precious resource to no-one’s benefit. Network pharmacology as a discipline offers a change of drug discovery method addressing the realities of the underlying biology, and brings fresh hope to the industry.

    • admin

      Thanks.

      I am a big fan of ‘network pharmacology’ (or perhaps even more broadly ‘network biology’, or as it is often called ‘systems biology’). A difficulty which I did not highlight is the schizophrenic tendency of big pharma companies to, on the one hand, only care if a drug ‘works’ or not, but on the other hand to have a powerful need to ‘understand’ what they are doing. Sadly, this understanding usually means that to be progressed a drug must have a simple, ‘pigeon-holed’ mechanism of action that can be described and understood by senior management in half a page of A4. The reality, for most of the prevalent diseases with high unmet medical need, is that we don’t have a single disease with a single identifiable cause, but a syndrome with overlapping symptoms and many related causes.

      The solution is a ‘network’ approach to both biomarkers and therapeutic pharmacology. That is why we, at http://www.totalscientific.com, use metabolomics, immunomics and highly multiplexed assay systems to describe the common PATTERNS associated with disease. A handful of biomarkers have far too little ‘information density’ to describe properly these ‘big, complex’ disease entities. Understanding the patterns that underlie disease and then matching a therapeutic network that reverses those changes is the model for future drug discovery, but first it requires a culture shift that says that progressing a drug through development doesn’t require a simple A4 summary that can be understood by anyone with or without a scientific background!

      But these network approaches have problems too! While other disciplines have got their heads around ‘big data’ and can safely use the massive multi-dimensional datasets that are yielded by multiplexed assay methodology, or multi-endpoint clinical trials, sadly biologists and clinicians – for the most part – have not. DrugBaron highlighted both the potential and the pitfalls for this network approach in a previous article.

      So I think we are in broad agreement what the problems are, and what the solution could be. Getting from here to there, though, is a monumental task.

      • http://networkpharmacology.com Network Pharmacology Blog

        Thanks for the response and your previous blog on composite endpoints, which we totally agree with. This area is laden with terms which can mean different things to different people.

        In our view, systems biology as initially termed has slightly failed to deliver on its initial promise, and the reason for this is defined in complex systems science, which makes up part of the network pharmacology church. Our cells appear to function as a result of intertwined proteomic systems, most of which are of a scale to be classified as complex. Complex systems science indicates such systems are robust to random intervention and single targeted intervention but are vulnerable to targeted synergistic points of intervention. Traditional linear means of analysis cannot establish these synergies, leading to the new area of complex systems science, which has only recently been applied to drug discovery. The compare and contrast with conventional drug discovery is touched upon in http://networkpharmacology.com/2012/08/06/the-awkward-truth-of-conventional-drug-discovery/. Systems biology, as is now accepted, looked to have the correct outlook and aspirations but lacked the scientific ammunition to cope with the realities of complex biological systems. Finally, the tools capable of addressing the challenge seem to be emerging.

        The skills to do the necessary maths are presently very specialised and held by a handful of mathematical biology experts globally, and we think this is where the innovation and future may lie.

        Your point about an A4-sized mode-of-action explanation for senior execs sounds like the voice of experience, and a real bottleneck to commitment of resources. Hopefully, over time, biomarkers devised from such an approach that can readily be correlated to the disease state (it may be necessary to break this down further into biomarkers specific to stages of the disease’s advancement) may help win this internal resources debate. Ultimately, when looked at holistically, if a given company refuses to develop such drugs, new players or competitors will not be so reluctant.

        Such functional systems knowledge should highlight the single biomarkers relevant for data collection and analysis, avoiding the composite issues you so clearly describe.

        Clearly the correct maths has a key role to play in research, drug discovery and clinical development. Let’s hope the necessary resources are channeled behind initiatives capable of addressing the issues – but this will require some objective assessment of what the issues are at the biological level.

  • Kelvin Stott

    While I agree that there are some valuable lessons here, about making sure we validate any hypothesis with hard data according to basic scientific principles, I think we need to be careful not to over-react and discard the amyloid hypothesis altogether.

    Indeed, there is still compelling scientific evidence (i.e., hard data) that aggregation of amyloid *does* cause Alzheimer’s disease, despite these latest disappointing clinical results. However, contrary to the common interpretation of the amyloid hypothesis, which suggests that insoluble amyloid plaques cause the disease, most of the evidence now indicates that a toxic soluble oligomeric form of amyloid (an intermediate in the aggregation process) is actually the primary pathogenic species, and forms non-specific porin-like ion channels (holes) in neuronal cell membranes. As more of these holes are formed over time, the cell has to work harder and harder to maintain equilibrium, but ultimately fails as too much calcium leaks through the cell membrane, causing aberrant signalling, hyperphosphorylation, oxidative stress, inflammation, and apoptosis. It’s a bit like trying to keep a bath tub full as more holes are punctured in the bottom: at some point the water flowing in through the tap can’t keep up with the water leaking out through the holes in the bottom.

    What is most interesting is that a very similar mechanism seems to occur in several other degenerative diseases, including Parkinson’s disease, type 2 diabetes, ALS (motor neuron disease), Huntington’s disease, CJD, and many others. In each case, a specific protein or peptide has been shown to aggregate into toxic soluble oligomers, which form non-specific porin-like ion channels in the affected parts of the brain or body: alpha-synuclein in Parkinson’s disease, amylin peptide in type 2 diabetes, superoxide dismutase in ALS, huntingtin protein in Huntington’s disease, prion protein in CJD, and so on. In fact, the similarity between all these diseases is quite remarkable.

    So, while these latest antibody-based drugs have not been effective in treating AD, I would say that the amyloid hypothesis is far from dead: it may just be that these antibodies target the wrong form of amyloid (insoluble plaques vs soluble oligomers), or due to their size they are unable to reach the amyloid and/or stop or reverse the formation of these toxic pores within the cell membranes…

    Either way, we need hard data to invalidate a hypothesis just as much as we need hard data to validate a hypothesis: that is the most basic principle of science.
