The Blockbuster Model is Dead! Long Live the Blockbuster Model!

The recent failure of bapineuzumab is yet another reminder of a large bet that didn't pay off. While the reasons for continuing development, especially in the face of equivocal early results, are complex, the more central question is whether the blockbuster model remains viable, and what the alternatives are.

The blockbuster model has become obsolete for several reasons: the low-hanging fruit has been picked, new therapeutic interventions such as monoclonal antibodies are highly specific and narrowly targeted, and economic justification is increasingly important for market success (NICE, formulary tiers, step edits, prior authorizations). Goodbye, mega-trials; hello, n-of-1 studies.

The smaller studies that many feel portend the future of the industry are dramatically different from the large studies that have propelled the industry to success in the past. First, smaller efforts have a shorter and more intense decision cycle. Recognizing that most studies are failures, we need an early indication of success, and we need a means of projecting where a study will end up even while it is in progress. This is especially critical for phase II studies, which are often run far longer than needed to detect a signal because traditional methods focus on p-values as the measure of success. Rather, we need methods that can project the likelihood of success from the outset, a likelihood that is updated with each new piece of information.
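To make the idea concrete, here is a minimal sketch of one such method: a Bayesian predictive probability of success, recomputed as each patient's outcome arrives. It assumes a single-arm phase II study with a binary response endpoint; the Beta(1,1) prior, sample size, and success cutoff are all illustrative choices, not anything prescribed here.

```python
# Minimal sketch: Bayesian predictive probability of success for a
# single-arm phase II trial with a binary response endpoint.
# Prior, sample size, and cutoff are illustrative assumptions.
from scipy.stats import betabinom

def predictive_prob_success(x, m, n_total, cutoff, a0=1.0, b0=1.0):
    """Probability the trial ends with >= `cutoff` responders,
    given `x` responders observed in the first `m` patients.

    The response rate gets a Beta(a0, b0) prior; the remaining
    n_total - m outcomes then follow a beta-binomial under the
    updated posterior Beta(a0 + x, b0 + m - x).
    """
    remaining = n_total - m
    needed = cutoff - x
    if needed <= 0:
        return 1.0          # success bar already met
    if needed > remaining:
        return 0.0          # success no longer reachable
    post = betabinom(remaining, a0 + x, b0 + m - x)
    return 1.0 - post.cdf(needed - 1)   # P(future responders >= needed)

# Updated with each new piece of information as the study accrues:
for x, m in [(3, 10), (7, 20), (12, 30)]:
    pp = predictive_prob_success(x, m, n_total=50, cutoff=20)
    print(f"{x}/{m} responders -> predictive probability of success: {pp:.2f}")
```

A team watching this number could stop for futility once it falls below a pre-specified threshold (say, 5%), or escalate once it rises high enough, rather than waiting for a final p-value at the end of a fixed sample size.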

Second, study mechanisms need to be highly flexible and responsive. As new information accumulates, we need to be able to respond quickly, whether cutting off a dosing arm, modifying the eligible population, resizing the study, or making any number of other changes.
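As one hedged sketch of what "cutting off a dosing arm" might look like in practice: an interim look that drops any arm whose posterior probability of beating control has become small. The arms, counts, and the 10% threshold below are invented for illustration.

```python
# Sketch of an interim arm-dropping rule: drop a dosing arm when its
# posterior probability of beating control is low. All numbers invented.
import numpy as np

rng = np.random.default_rng(0)

def prob_beats_control(x_arm, n_arm, x_ctl, n_ctl, draws=100_000):
    """Posterior P(arm response rate > control), using Beta(1,1) priors
    and Monte Carlo draws from each beta posterior."""
    arm = rng.beta(1 + x_arm, 1 + n_arm - x_arm, draws)
    ctl = rng.beta(1 + x_ctl, 1 + n_ctl - x_ctl, draws)
    return float((arm > ctl).mean())

arms = {"low dose": (4, 15), "mid dose": (8, 15), "high dose": (9, 15)}
control = (3, 15)   # (responders, patients)

for name, (x, n) in arms.items():
    p = prob_beats_control(x, n, *control)
    action = "continue" if p >= 0.10 else "drop arm"
    print(f"{name}: P(better than control) = {p:.2f} -> {action}")
```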

Third, to support this decision-making, information flow has to be far broader and faster: when something occurs in the field, whether a screen failure or data generated, we need that information in hours to days. As an industry, we fall short on both counts. Data generally takes weeks to months (or longer) from generation (note this is NOT entry into an EDC system, which is the wrong measure), we focus on data to the detriment of information, and our standards for project management are lamentably poor when measured against the rest of the world.

This is fundamentally an informatics issue, but consider two simple points:

(1) much of the information most critical to study success (such as detailed information about enrollment and study retention factors) is not measured at all, and

(2) data itself takes weeks to months, sometimes longer, before it is suitable for decision making. The simple concept of how data is refined to actionable information—what each individual needs to see to do his or her job—is lacking entirely. An avalanche of raw data, even in colorful graphs and charts, is not information.
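If we are serious about the second point, the latency has to be measured the right way: from when a data point is generated at the site to when it is usable for a decision, not when it lands in the EDC. A tiny sketch, with hypothetical field names and timestamps:

```python
# Sketch: measure generation-to-decision latency per data point.
# The timestamps and record structure are hypothetical illustrations.
from datetime import datetime
from statistics import median

records = [
    # (generated_at_site, ready_for_decision) -- NOT EDC entry time
    (datetime(2012, 7, 2, 9, 0),  datetime(2012, 7, 30, 14, 0)),
    (datetime(2012, 7, 5, 11, 0), datetime(2012, 8, 20, 10, 0)),
    (datetime(2012, 7, 9, 8, 30), datetime(2012, 7, 12, 16, 0)),
]

latencies_days = [(ready - gen).total_seconds() / 86400
                  for gen, ready in records]
print(f"median generation-to-decision latency: {median(latencies_days):.1f} days")
```

Tracking a number like this, per site and per data type, is the kind of refinement of raw data into actionable information that the industry currently lacks.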

But the good news in all of this is that there are indications that all three of these areas are beginning to be addressed. Given that, why would anybody, especially big pharma, for whom these issues are central to their very survival, continue to apply the blockbuster model of large, expensive, monolithic study platforms to an environment where it is clearly failing?

More on the alternative in future posts.

http://www.fiercebiotech.com/story/herper-alzheimers-bust-bespeaks-pharmas-bad-rd-betting/2012-08-09
