The Importance of Information-Flow: The Case of the Thwarted Startup

In my last post, I discussed the cultural problem of excessive risk-aversion preventing pharma and CROs from using methods that could improve R&D productivity. Nowhere is this issue more evident than in the critical area of information flow and decision making. Here’s an example of how a potentially important drug failed in a manner that might have been prevented if the company had had good information flow, built on capabilities that existed at the time.

The overall study results produced a p-value slightly greater than 0.05, which unambiguously spelled disaster. But a closer look revealed a single site in Europe whose results differed from those of most of the other sites. The company’s former CEO claimed that these deviations were due to a failure to irradiate patients at the time required by the protocol, and that excluding this single site’s results brought the overall p-value below the magic 0.05 level. It was unclear whether the outliers reflected actual protocol deviations or were simply extremes, and the FDA rejected the NDA. The CEO insisted the drug was effective and important for patients, and filed the application “over protest,” submitting an analysis that excluded the sites felt to have performed poorly.
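To make the statistical issue concrete, here is a minimal sketch of a leave-one-site-out sensitivity analysis. The site names, responder counts, and the use of a simple pooled Fisher exact test are all invented for illustration; they are not the trial’s actual data or its analysis plan, and a real reanalysis would use the protocol’s prespecified (typically stratified) methods.

```python
# Hypothetical illustration only: simulated site-level responder counts,
# not the actual trial data or its statistical analysis plan.
from scipy.stats import fisher_exact

# (site_id, treated_responders, treated_n, control_responders, control_n)
sites = [
    ("US-01", 14, 30, 8, 30),
    ("US-02", 12, 28, 9, 29),
    ("EU-07", 5, 27, 11, 26),   # the outlier site in this made-up example
    ("US-03", 13, 31, 7, 30),
]

def pooled_p(site_rows):
    """Pool counts across sites and return a Fisher exact p-value."""
    tr = sum(r[1] for r in site_rows)
    tn = sum(r[2] for r in site_rows)
    cr = sum(r[3] for r in site_rows)
    cn = sum(r[4] for r in site_rows)
    _, p = fisher_exact([[tr, tn - tr], [cr, cn - cr]])
    return p

print(f"All sites:          p = {pooled_p(sites):.4f}")
for excluded in sites:
    rest = [r for r in sites if r is not excluded]
    print(f"Excluding {excluded[0]}: p = {pooled_p(rest):.4f}")
```

The point of such a check is not to justify dropping inconvenient data after the fact, but to show how heavily an overall result can hinge on one site, which is exactly the kind of signal a sponsor wants to see while the study is still running.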

Without first-hand knowledge of the study, it is impossible to say whether the CEO’s interpretation of events was correct. However, one thing is clear: the sponsor and the team managing the study did not find out about serious problems in the field until it was too late to intervene and save the study. The outcome might have been dramatically different if problems had immediately been detected and corrected.

This case is a perfect example of how lack of access to timely performance metrics adds unnecessary risk. Typical site-monitoring schedules leave studies open to potentially devastating failures, and most EDC systems focus on collecting subject data to the exclusion of indices of how a site is performing, including key measures, such as enrollment, that drive study cost and timelines. Such indices are far more important for effective study management than subject data, yet our industry generally regards them as an afterthought at best. The most regrettable part of this approach is that the means to avoid such problems exist and have long been in use, albeit not widely.
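Here is a minimal sketch of what continuous site-performance flagging might look like. The metric names and thresholds are invented for illustration and are not drawn from any particular EDC system or monitoring plan; the point is only that these checks are simple enough to run every day.

```python
# Hypothetical sketch: field names and thresholds are invented, not taken
# from any particular EDC system or sponsor's monitoring plan.
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    site_id: str
    enrolled: int
    enrollment_target: int
    open_queries_per_subject: float
    protocol_deviations: int

def flag_site(m: SiteMetrics) -> list:
    """Return human-readable flags for a site that needs attention."""
    flags = []
    if m.enrollment_target and m.enrolled / m.enrollment_target < 0.5:
        flags.append("enrollment below 50% of target")
    if m.open_queries_per_subject > 3.0:
        flags.append("high open-query rate")
    if m.protocol_deviations >= 5:
        flags.append("repeated protocol deviations")
    return flags

sites = [
    SiteMetrics("US-01", 24, 30, 1.2, 1),
    SiteMetrics("EU-07", 9, 30, 4.5, 7),
]

for site in sites:
    for flag in flag_site(site):
        print(f"{site.site_id}: {flag}")
```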

Similarly, you might remember a high-profile story from a couple of years ago in which a large biopharma company received a Form 483 (the FDA’s notice of inspectional observations) for incorrectly dosing pediatric patients. This was another protocol failure that could have been corrected by getting timely, accurate data into readily interpreted reports that alert the sponsor the moment a study starts down a path the sponsor never wanted or expected. Processes that could have allowed prompt intervention and prevented recurrences of such unfortunate events do exist. The question is why pharma companies that have invested heavily in basic science, preclinical work, and first-in-man studies allow costly later-phase studies to proceed without ensuring access to accurate information about what is happening in the field. Fortunately, proven technology and processes can powerfully reduce the risk of such errors.
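A dosing cross-check is an example of the kind of automated alert that could catch such an error in near real time rather than at an inspection. The weight-based dosing rule and tolerance below are invented for illustration; they are not any actual protocol’s dosing scheme.

```python
# Hypothetical sketch: the dose rule and tolerance below are invented for
# illustration and are not any actual protocol's pediatric dosing scheme.
from typing import Optional

PROTOCOL_DOSE_MG_PER_KG = 2.0   # assumed protocol-specified dose
TOLERANCE = 0.10                # allow +/-10% before flagging

def check_dose(subject_id: str, weight_kg: float, administered_mg: float) -> Optional[str]:
    """Return an alert string if the administered dose deviates from protocol."""
    expected = PROTOCOL_DOSE_MG_PER_KG * weight_kg
    if abs(administered_mg - expected) > TOLERANCE * expected:
        return (f"ALERT {subject_id}: administered {administered_mg} mg, "
                f"expected ~{expected:.1f} mg for {weight_kg} kg")
    return None

for rec in [("P-101", 22.0, 44.0), ("P-102", 18.5, 60.0)]:
    alert = check_dose(*rec)
    if alert:
        print(alert)
```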

In my next post, I’ll talk about other operational areas where improvements remain the most immediate and powerful means of enabling faster, less risky, and more efficient study conduct.
