Breaking the Efficiency Barrier

International Clinical Trials

Michael Rosenberg of Health Decisions, Inc, discusses ways in which the pharma industry can resolve efficiency barriers and provides a useful analogy that demonstrates how the drug development process can continue to redefine itself and grow.

February 1, 2011 - Despite enormous expenditures of effort, time and money, the pharma industry produces a disappointingly small number of approved new drugs. The industry has long recognised that its business model is unsustainable unless it can learn to produce greater numbers of new drugs at substantially lower cost. Thirty-four new drugs and biologics won FDA approval in 2009 (1). This output seems disappointing when drug and biotech companies spent more than $300 billion on R&D from 2005 through to 2009. Despite a vigorous quest for faster, cheaper and less risky development methods, the industry has met an efficiency barrier that seems unbreakable.

Although efficiency-challenged, the pharma industry understands that personalised medicine must play an important role in its future. But how can an industry whose current development approach struggles to produce medicines profitably for populations of tens or hundreds of millions of patients thrive from developing medicines for mere thousands or tens of thousands? Unless the industry can break the efficiency barrier, the era of personalised medicine may prove to be of more theoretical than commercial interest.

Other industries have encountered and broken barriers that once seemed as impassable as the pharma industry’s efficiency barrier. The ‘sound barrier’ consisted of puzzling and dangerous phenomena that threatened to destroy aircraft as they approached transonic speeds. This barrier stood between the commercial aircraft industry and a new era of popular air travel based on reduced flight times between major destinations. The top speed of traditional aircraft based on piston engines, propellers and straight wings limited the commercial possibilities of intercontinental air travel. Propeller blades suffered from dangerous stresses due to turbulence and shock waves as their tips approached transonic speeds. Perhaps even worse was the power-to-weight ratio of piston engines. As a traditional aircraft neared the speed of sound, further acceleration required increasing the weight of a piston engine by an amount that offset the incremental power output of the attached propeller. Aircraft manufacturers could spend more money on traditional design and buy bigger piston engines and stronger materials, but they could not make traditional aircraft approach or break the sound barrier. The aircraft industry had no choice but to face facts: speeds beyond Mach 1 required new engine technology, new principles of aerodynamic design and new manufacturing techniques.

In clinical development, increased spending on traditional design, operations and technology has failed to break the efficiency barrier. Typical data management practices that mimic the processes of the paper era limit efficiency. Traditional monitoring methods with a high dependence on individual human performance, site visits at fixed intervals and exhaustive data checking impose further limits. Traditional designs that restrict access to information until the very end of a study rule out mid-course changes, condemning studies to pursue the original plan to its conclusion regardless of how inefficient the plan turns out to be under actual study circumstances. Such traditional methods are the building blocks of the efficiency barrier in clinical development. It follows that we cannot expect such methods to break the efficiency barrier. Just as breaking the sound barrier required a new approach to aircraft design and manufacturing, breaking the current efficiency barrier requires a new approach to designing and conducting clinical studies.

DIGESTING INFORMATION BY STUDY ROLE

Adaptive study designs, adaptive operations and an associated technology base can revolutionise clinical research just as new aeronautic designs, new engine technology and new manufacturing methods revolutionised aviation. An advance in the technology base must come first. In clinical research, the technology required to increase efficiency must enable each of us to do our jobs better. Improved data capture is essential but is only one step towards a solution. The technology required to break the efficiency barrier must collect, digest and present the information that each of us needs to carry out our jobs. The output of the process that begins with data capture must be information specific to the individual roles in clinical development. The information needed by senior managers differs from that needed by project managers, CRAs and site staff. The information that all of these groups need in order to conduct studies with greater efficiency goes well beyond patient data. Performance metrics are essential to allow members of the study team to understand what is happening in a study and react to keep the study on track. Such performance metrics may be direct measures, such as the query rate, or derived, such as a site performance index that incorporates several other metrics and weights each measure based on management priorities (see Table 1).
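
To make the idea of a derived metric concrete, the minimal sketch below combines several direct measures into a weighted site performance index. The metric names, weights and normalisation are illustrative assumptions rather than the specific index described in the article; a real index would reflect a sponsor's own management priorities.

```python
# Sketch of a derived site performance index (hypothetical metrics and weights).
# Each direct measure is normalised to 0-1 (1 = best) and combined with weights
# that reflect management priorities.

def site_performance_index(metrics: dict, weights: dict) -> float:
    """Weighted average of normalised performance metrics for one site."""
    total_weight = sum(weights.values())
    return sum(weights[name] * metrics[name] for name in weights) / total_weight

# Hypothetical normalised metrics for one site (1.0 = ideal, 0.0 = worst).
site_metrics = {
    "enrolment_vs_target": 0.85,    # fraction of enrolment target achieved
    "query_rate_score": 0.70,       # 1 minus queries per CRF page, capped at 1
    "data_entry_timeliness": 0.90,  # fraction of visits entered within 3 days
    "protocol_deviation_score": 0.95,
}

# Hypothetical weights expressing management priorities.
priority_weights = {
    "enrolment_vs_target": 0.4,
    "query_rate_score": 0.3,
    "data_entry_timeliness": 0.2,
    "protocol_deviation_score": 0.1,
}

print(f"Site performance index: {site_performance_index(site_metrics, priority_weights):.2f}")
```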

In early aviation, pilots had relatively little information about flight status – certainly no radar and no GPS. If a flight encountered unexpected weather conditions and dense fog, a pilot could make few adjustments. Not surprisingly, many flights went off course, and some ended in disasters that would not occur today. For example, a 1938 TWA flight encountered bad weather and flew into a mountain in Yosemite National Park (2). Managers of today’s clinical studies are like pilots of a bygone era flying blind, without the aid of information from modern instruments.

CHANGING COURSE IN CHANGEABLE CONDITIONS

Like a flight plan, a study design defines an optimal course based on knowledge available at the time of planning. Efficiency and safety are major considerations both in air travel and clinical development. However, the pilot in an aircraft cockpit has a continuous flow of information to assess how well the flight plan is working out in practice and to decide on changes to ensure the most timely and efficient arrival at the destination given actual conditions encountered. If strong headwinds at the planned altitude are reducing airspeed and wasting fuel and time, the flight can move to a more favourable altitude. If radar indicates severe weather along the planned flight path, the pilot can consult with air traffic control and navigate around it.

Clinical studies benefit from few equivalents to the real-time information that pilots see in today’s cockpit. Furthermore, traditional plans for clinical studies have allowed little flexibility. The lack of information and flexibility go hand in hand. There has been neither detailed knowledge of deviations from the conditions planners assumed nor information to allow identification of the types of adjustments that might allow the study to achieve its goals despite unforeseen conditions. Although study plans typically remain rigid, technology can now provide a basis for a different approach. Near real-time information about study status, both for individual sites and for a study as a whole, allows study managers to optimise the course of a study given actual conditions. Midcourse performance metrics may reveal trends in key operations, such as enrolment, allowing adjustments in strategy and tactics. With appropriate safeguards, studies can also take advantage of midcourse patient data to assess the accuracy of planning assumptions and, if necessary, adjust study design elements to keep the study on track. It is no longer necessary for study management to fly blind or for sponsors to remain in the dark about the status of their studies.

ADAPTING STUDY DESIGNS

Adaptive study designs allow an assessment of planning assumptions in the light of actual study data and can be adjusted accordingly. For example, the size of the treatment effect is one key parameter used to determine the sample size required to ensure the ability to detect a difference between the test drug and the comparator. If a data monitoring committee determines that the treatment effect is greater than anticipated, the committee can recommend a reduction in sample size to conserve resources and avoid exposing patients unnecessarily to an experimental treatment. If the treatment effect is smaller than anticipated, the committee can recommend an increase in sample size to prevent the study from proceeding to its conclusion with no hope of producing meaningful results. The industry has traditionally ‘overbuilt’ studies to allow for the possibility of a smaller than expected treatment effect. Enrolling a sample size perhaps 20 per cent greater than believed necessary guarded against the possibility of a failed study, but was obviously a barrier to efficiency. The ability to adjust sample size in the middle of the study based on actual study data ensures an adequate sample size without waste.
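
For a two-arm comparison of means, the standard normal-approximation formula makes the dependence on the assumed treatment effect explicit: the required sample size grows with the inverse square of the effect size. The sketch below applies that textbook formula with illustrative planning numbers (an assumed effect of 0.5 units and a standard deviation of 1.5), not figures taken from any particular study.

```python
# Normal-approximation sample size per arm for a two-sample comparison of means:
# n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
import math
from scipy.stats import norm

def n_per_arm(delta: float, sigma: float, alpha: float = 0.05, power: float = 0.9) -> int:
    """Sample size per arm to detect a mean difference delta with the given power."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Illustrative planning assumptions: true effect 0.5 units, standard deviation 1.5.
print(n_per_arm(delta=0.5, sigma=1.5))   # roughly 190 per arm
# If interim data suggest the effect is only 0.4, the required size grows sharply:
print(n_per_arm(delta=0.4, sigma=1.5))   # roughly 296 per arm
```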

By providing earlier access to information, adaptive methods allow cutting losses earlier. Consider a typical case involving a midcourse look at data to determine whether the size of the treatment effect requires sample-size re-estimation. Suppose the treatment effect is not only less than expected, but dramatically so. This presents an opportunity to terminate the study early rather than continue on a futile course. However, if the treatment effect is lower than expected, but within a reasonable range, the appropriate course may be to increase sample size. This can save the study by restoring adequate statistical power. If the treatment effect is exactly as planners estimated, the sample size can be reduced by the buffer (traditionally 20 per cent) that is built into studies during planning because of uncertainty inherent in the estimate of effect size. In this case, sample size re-estimation eliminates the need for that buffer, reducing sample size by 20 per cent. If the treatment effect is stronger than planners estimated, even greater savings from reducing sample size are available. Thus, the use of sample size re-estimation saves a minimum of 20 per cent compared with the traditional approach that provides for a larger buffer population to guard against a failed study.
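
The scenarios above amount to a small set of pre-specified decision rules. A minimal sketch of such rules follows; the futility threshold, the 20 per cent planning buffer and the inverse-square rescaling are illustrative assumptions, and in practice the rules would be written into the protocol and applied by the independent committee at the interim look.

```python
# Sketch of interim sample-size re-estimation logic (illustrative thresholds only).

def reestimate_sample_size(planned_n: int, assumed_effect: float,
                           observed_effect: float, buffer: float = 0.20,
                           futility_fraction: float = 0.5) -> tuple:
    """Return the recommended action and the revised total sample size."""
    core_n = round(planned_n / (1 + buffer))   # planned size without the 20% buffer
    ratio = observed_effect / assumed_effect

    if ratio < futility_fraction:
        # Effect dramatically smaller than planned: stop early for futility.
        return "terminate for futility", 0
    if ratio < 1.0:
        # Effect smaller but within a reasonable range: scale up to preserve power
        # (required n grows with the inverse square of the effect size).
        return "increase sample size", round(core_n / ratio ** 2)
    # Effect as planned or stronger: the planning buffer is no longer needed.
    return "reduce sample size", min(planned_n, round(core_n / ratio ** 2))

print(reestimate_sample_size(planned_n=600, assumed_effect=0.5, observed_effect=0.45))
print(reestimate_sample_size(planned_n=600, assumed_effect=0.5, observed_effect=0.5))
```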

Planning an adaptive study requires a systematic effort to identify potential scenarios and contingencies (3). Tools such as decision trees and simulations can aid planners by increasing their understanding of the possibilities and providing an opportunity to think about how the plan can provide for appropriate responses. It is always advisable to consult with regulators before employing an adaptive design. However, using midcourse data for early termination for futility and for sample size re-estimation has won wide acceptance. Other important types of adaptive designs include techniques for dose finding, randomisation and seamless combined-phase studies. Adaptive designs can improve dose selection for a confirmatory study by making it economical to test a greater number of dosing levels, pruning the least effective or least safe doses early while continuing the treatment arms with the most promising doses. Identifying the optimal dose in a more precise fashion increases the likelihood of success in the next phase of development. With an appropriate design and regulatory approval, a Phase IIb dose-finding study may continue seamlessly into Phase III testing of the doses offering the best balance of safety and efficacy – indeed, this is a step toward the ideal of continuous development. Eliminating the gaps between phases can save as much as one third of the development time and cost. Conducting the next study with the optimal dose can make the difference between success and outright failure. Although every study is different, it is rare that an adaptive technique cannot provide some reduction in timelines, cost or risk.
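
Simulation, one of the planning tools mentioned above, can show how often a proposed adaptive rule would stop for futility, increase the sample size or release the buffer under different assumptions about the true treatment effect. The sketch below is a minimal example under hypothetical assumptions (normally distributed outcomes, a single interim look at 95 patients per arm, and the illustrative thresholds used earlier); it is a planning aid, not a validated trial simulator.

```python
# Minimal planning simulation (hypothetical assumptions): simulate the interim look
# of a two-arm trial and tabulate how often each adaptive decision would be taken.
import numpy as np

rng = np.random.default_rng(seed=1)

def simulate_interim(true_effect: float, sigma: float = 1.5, interim_n: int = 95,
                     assumed_effect: float = 0.5, n_sims: int = 10_000) -> dict:
    """Frequency of each interim decision under the illustrative re-estimation rule."""
    decisions = {"futility stop": 0, "increase n": 0, "keep or reduce n": 0}
    for _ in range(n_sims):
        treated = rng.normal(true_effect, sigma, interim_n)
        control = rng.normal(0.0, sigma, interim_n)
        ratio = (treated.mean() - control.mean()) / assumed_effect
        if ratio < 0.5:
            decisions["futility stop"] += 1
        elif ratio < 1.0:
            decisions["increase n"] += 1
        else:
            decisions["keep or reduce n"] += 1
    return {k: v / n_sims for k, v in decisions.items()}

# How would the rule behave if the drug works exactly as planned, or not at all?
print(simulate_interim(true_effect=0.5))
print(simulate_interim(true_effect=0.0))
```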

ADAPTING OPERATIONS

Adaptive designs require prior regulatory approval and allow selective midcourse changes based on an interim look or observation of patient data. The adaptive approach to operations is more readily accessible because it does not require regulatory approval and often provides greater benefits because it optimises comprehensively and continuously rather than, like most techniques for adapting designs, selectively and at intervals. Any time a continuous flow of performance metrics indicates an opportunity to increase the efficiency of operations – whether in enrolment, monitoring, query management, site closeout or database lock – the study team can analyse role-specific information and take action. Adaptive operations represent continuous refinement of all study processes, based on continuous field performance metrics. They may include major changes in ill-conceived enrolment strategies and the like, but more often involve many small adjustments, like the many small changes needed to keep an aircraft on the optimal course.

Adaptive enrolment uses continuous feedback to identify which enrolment strategies and tactics are succeeding and which are not, encourages study-wide adoption of the most successful approaches, including messaging and placement of advertisements, and focuses resources accordingly. If tracking information indicates that a noncritical inclusion/exclusion criterion is depressing enrolment, there is an opportunity to consider modifying that criterion. A tiered enrolment strategy that prequalifies a second group of sites for activation if necessary can often prevent enrolment delays. Timely tracking information can provide a signal for rapid activation of second-tier sites before the study slips behind enrolment targets and timelines.
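
A minimal sketch of this kind of tracking signal follows, assuming weekly enrolment counts and a simple projection rule for when to activate prequalified second-tier sites; the thresholds and figures are illustrative, not drawn from any particular study.

```python
# Sketch of an enrolment tracking signal (illustrative numbers and thresholds).

def weeks_to_target(enrolled: int, target: int, recent_weekly_rate: float) -> float:
    """Projected weeks needed to reach the enrolment target at the recent rate."""
    if recent_weekly_rate <= 0:
        return float("inf")
    return (target - enrolled) / recent_weekly_rate

def should_activate_second_tier(enrolled: int, target: int, recent_weekly_rate: float,
                                weeks_remaining: float, safety_margin: float = 1.1) -> bool:
    """Recommend activating prequalified second-tier sites if the projection
    exceeds the remaining timeline by more than a safety margin."""
    return weeks_to_target(enrolled, target, recent_weekly_rate) > weeks_remaining * safety_margin

# Illustrative status: 240 of 400 patients enrolled, about 8 per week recently, 16 weeks left.
print(weeks_to_target(240, 400, 8.0))                  # 20.0 weeks projected
print(should_activate_second_tier(240, 400, 8.0, 16))  # True: activate tier two now
```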

The net improvement from energetic use of adaptive enrolment techniques can be profound. CenterWatch surveys indicate that only 10 per cent of studies in the US enrolled on time (4). Studies outside the US do marginally better, with 14 to 17 per cent of studies enrolling on time (5). By contrast, the systematic use of adaptive enrolment based on detailed, timely tracking information has enabled one CRO to enrol 83 per cent of its studies on or ahead of schedule (6).

Rather than conducting site visits at the same fixed interval for all sites, adaptive monitoring allocates site visits based on measures of the quality and quantity of data. Making this approach work requires a blend of continuous remote management by a central team with dynamically allocated site visits. The approach enables very rapid feedback when improvements are possible, reduces a study’s vulnerability to the limitations of any individual by involving a team, and harnesses the speed and accuracy of computers for more sophisticated analyses such as trending.
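
One way to picture the allocation step is to rank sites by a simple risk score built from data quality and volume measures and to schedule the next visit sooner for higher-risk sites. The weights, score bands and visit intervals below are illustrative assumptions, not a description of any particular monitoring algorithm.

```python
# Sketch of risk-based allocation of site visits (illustrative scores and intervals).

def site_risk_score(query_rate: float, pages_outstanding: int,
                    days_since_last_visit: int) -> float:
    """Crude risk score: higher means the site needs attention sooner."""
    return (10 * query_rate             # queries per CRF page
            + 0.05 * pages_outstanding  # unreviewed data volume
            + 0.02 * days_since_last_visit)

def next_visit_interval_weeks(score: float) -> int:
    """Map the risk score to a monitoring visit interval (hypothetical bands)."""
    if score >= 3.0:
        return 2    # problem site: visit soon, plus extra remote contact
    if score >= 1.5:
        return 6
    return 12       # well-performing site: longer interval, mostly remote review

sites = {
    "Site 101": site_risk_score(0.30, 40, 60),
    "Site 202": site_risk_score(0.05, 5, 30),
}
for name, score in sorted(sites.items(), key=lambda kv: -kv[1]):
    print(name, round(score, 2), "-> next visit in", next_visit_interval_weeks(score), "weeks")
```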

Improved technology is the key enabler for remote monitoring. For example, remote monitoring becomes far more effective if technology collects performance metrics on sites and generates reports that provide monitors with insight into current study status at each site. Data capture technology can also provide powerful support for remote monitoring. Studies that use a digital pen or digital tablet may be able to use electronic source documents. Remote monitors can review those documents rapidly, perhaps on the same day as the patient visit recorded in the eCRF. With continuous remote monitoring as an anchor, problem sites receive earlier attention – including, as necessary, increased telephone and email contact – and earlier, more frequent site visits. The best performing sites may receive fewer site visits at longer intervals. In some cases, especially with close management using operational adaptations based on timely data, adaptive monitoring reduces the total number of site visits required by the study, reducing travel expenses and thus monitoring costs.

Like adaptive enrolment, adaptive monitoring often provides substantial savings, ranging from 20 per cent to as much as 80 per cent for studies with a high proportion of electronic source data. Adaptive operations can provide such great improvements in efficiency in part because traditional clinical studies institutionalise waste and take the need for substantial amounts of rework – redoing a task that was done incorrectly the first time – as a given. The typical query process is one notable instance of institutionalised waste through often needless rework. The flaw in the query process is the focus on resolving queries one at a time after the fact. It is far more productive to use a flow of continuous information to identify the problems that lead to errors and queries and then to eliminate these problems at the source, thus greatly reducing the number of queries. Having no query to resolve is far more efficient than any approach to resolving individual queries.
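
Finding the sources of queries rather than resolving them one at a time can start with something as simple as aggregating open queries by form field and by site, so that the handful of fields or sites generating most of the volume can be fixed through better edit checks, CRF design or site training. The record structure and field names below are hypothetical.

```python
# Sketch of query source analysis: find the fields and sites generating the most
# queries so the underlying problem can be fixed at the source (hypothetical data).
from collections import Counter

queries = [
    {"site": "Site 101", "field": "visit_date"},
    {"site": "Site 101", "field": "visit_date"},
    {"site": "Site 202", "field": "concomitant_meds"},
    {"site": "Site 101", "field": "lab_units"},
    {"site": "Site 303", "field": "visit_date"},
]

by_field = Counter(q["field"] for q in queries)
by_site = Counter(q["site"] for q in queries)

# The top entries point to where training, CRF design or edit checks should change.
print(by_field.most_common(2))
print(by_site.most_common(2))
```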

CONCLUSION

The adaptive approach will enable the pharma industry to break the efficiency barrier and realise a brighter future. The combined use of sample size re-estimation and adaptive enrolment enabled one study of a new treatment for metastatic breast cancer to save a year’s development time, increasing the period of marketing under patent protection and increasing revenue projections for the life of the product by $1 billion. In a device study, adaptive operations saved $40 million in development costs and allowed an innovative young company to bring a diagnostic product to market two years ahead of a competitor, leading to a lucrative acquisition and an early, highly profitable exit for the young company’s investors. In recent decades, a seemingly impassable efficiency barrier dimmed the future prospects of the pharma industry and appeared poised to end the industry’s golden age. By moving to combined use of adaptive operations, adaptive designs and supporting technologies, the industry can make the efficiency barrier a footnote in its history rather than a leading theme in the decline of one of the world’s great industries.

References

  1. Pharmaceutical Industry Profile 2010, Pharmaceutical Research and Manufacturers of America (PhRMA), Washington DC, US, March 2010
  2. Beitler S, Yosemite National Park, CA Airliner Crash, March 1938, GenDisasters, 13 May 2009. Available from: http://www3.gendisasters.com/california/12691/yosemite-national-park-ca-airliner-crash-mar-1938
  3. Rosenberg MJ, Chapter 7: Planning Adaptive Programs, in The Agile Approach to Adaptive Research: Optimizing Efficiency in Clinical Development, February 2010
  4. CenterWatch survey of US investigative sites (2009, n=950)
  5. CenterWatch surveys of Asian (2006, n=156), Latin American (2005, n=317) and European and Canadian (2006, n=356) investigative sites
  6. Rosenberg MJ, Adaptive Enrollment and What It Means to You, presented at the DIA Clinical Forum, Lisbon, 13 October 2010