Operations and the Future of Adaptive Research

The Monitor – Association of Clinical Research Professionals

By Michael Rosenberg, MD, MPH

December 2009 - After more than a decade of effort expended and billions of dollars invested, a development program has failed to produce data. The science seems promising and the experimental design seems as if it should function as intended, but a huge delay in continuing the work has occurred because no one involved in the program’s mundane operations recognized certain serious problems before they reached a scale that compromised the entire project.

Which big pharmaceutical company got the bad news this time? Which CEO came under attack because of plunging share values? Hint: The organization’s main office is in Switzerland. Clinical research professionals will immediately guess Hoffmann-La Roche or Novartis. However, the disappointed organization is not a pharmaceutical company, but CERN, the European Organization for Nuclear Research.

The delay affects the Large Hadron Collider, the 27-km particle accelerator under development for 15 years at a cost of $4.3 billion, far in excess of the originally budgeted $1.6 billion.[1-3] Physicists hope that the world’s most powerful accelerator will produce new knowledge about the fundamental structure of the universe, but that will have to wait. After shutting the accelerator down in September 2008, CERN plans to restart the machine at reduced power in mid-November 2009. Experiments at full power must wait until 2011.[4]

After years of criticism about delays in clinical development projects, clinical researchers may be relieved to know that other scientists can also exceed huge budgets and long timelines. Indeed, if the Large Hadron Collider were a drug, its costs and timelines would be worse than average.

For the collider, the problems were poorly soldered splices and leaky helium hoses.[5] The lesson for clinical researchers is that advanced science and sophisticated experimental design do not inoculate studies against operational issues. Mundane problems routinely delay experimental results on new drugs and biologics. Problems such as slow enrollment, ineffective management of remote sites, and wasteful patching of things not done right the first time add years to drug development programs.

The Inadequacy of Operations-as-Usual in Adaptive Studies

So why does this issue on the future of clinical research include an article about operational shortcomings? The reason is simple: Suboptimal management of operations in clinical studies compromises, and even threatens, the future of adaptive research, which is the best hope for desperately needed improvements in the efficiency of clinical studies. While adaptive research focuses exclusively on increasing efficiency by using data collected during a study to change some aspect of study design, four important realities loom:

  • Study designs are only one source of inefficiency in clinical studies;
  • Inefficiency in study operations is widely reported, with monitoring costs high[6] and 80% of studies failing to enroll on schedule;[7]
  • Typical operational processes leave program managers and clinical research associates (CRAs) without the timely information they need to conduct adaptive studies effectively;
  • Typical operational processes can block or limit the ability to execute timely design adaptations.

Thus, any meaningful effort to improve the efficiency of clinical research must take the managerial and informational challenges of operations into account. Creating a sophisticated plan to adapt the design of a clinical study is not enough. Because of the historical hands-off approach to managing clinical studies, study operations evolved without the need to deliver timely, accurate, midcourse data. As a result, typical operational processes may not be up to the challenge of delivering the data required for decisions about implementing design adaptations during studies.

How Typical Operations Compromise Design Adaptations

Delays in access to clean, accurate data can block successful execution of common strategies for adapting study designs. The most common design adaptation is sample size reestimation. In one study, data about such parameters as the variability of the treatment effect would have allowed reducing sample size from an initial estimate of 1,020 to 548 patients. However, by the time study operations delivered the data, statisticians recommended the reduction, and study managers informed sites, the study had already randomized 787 patients. The study enrolled an excess of 239 patients. Operations-as-usual limited what could have been the benefits of sample size reestimation.[8]
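
To make the arithmetic concrete, the sketch below applies the standard normal-approximation formula for a two-arm sample size, recomputes it from a smaller interim estimate of outcome variability, and measures how much of the potential reduction is lost once enrollment has overshot the revised target. The standard deviations, treatment difference, and enrollment count in the example are hypothetical, not figures from the cited trial.

    from math import ceil
    from statistics import NormalDist

    def total_sample_size(sigma, delta, alpha=0.05, power=0.80):
        """Two-arm total sample size from the normal-approximation formula."""
        z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
        z_beta = NormalDist().inv_cdf(power)            # desired power
        per_group = ceil(2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2)
        return 2 * per_group

    # Hypothetical planning assumptions: detect a 3-unit difference, SD of 16.
    planned = total_sample_size(sigma=16.0, delta=3.0)

    # Hypothetical interim data suggest the outcome is less variable than assumed.
    reestimated = total_sample_size(sigma=11.5, delta=3.0)

    # The benefit depends on how many patients are already randomized when the
    # recommendation arrives; anything beyond the new target is excess enrollment.
    already_randomized = 600                            # hypothetical
    excess = max(0, already_randomized - reestimated)
    print(f"planned={planned}, re-estimated={reestimated}, excess={excess}")

Applied to the figures reported above, the same subtraction gives the 239 excess patients: 787 randomized by the time the decision arrived, against a re-estimated requirement of 548.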

Realizing the benefits of adaptive safety testing also requires highly efficient operations. The best known adaptive approach, the Continual Reassessment Method (CRM), adjusts the dose based on the response of each individual patient, rather than awaiting response data on cohorts of three. CRM also allows starting partway up the response curve and adjusting up or down, rather than gradually proceeding upward from an extremely low starting dose.

For obvious reasons, methods like CRM are most valuable when treatment response is rapid. As soon as the data for the previous patient are validated, researchers use the planned adaptive strategy for determining the dose for the next patient. If there is a safety issue, the dose may be reduced. If efficacy is lacking, the dose may be increased. However, dose determination for the next patient hinges on receipt of valid information on the response of the last patient treated. If operations fail to deliver such information promptly, large potential time savings from adaptive dose finding evaporate.
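
For readers unfamiliar with CRM mechanics, here is a minimal sketch of the classic toxicity-targeting form, assuming a one-parameter power model, a discrete prior grid, and a hypothetical skeleton of prior toxicity guesses; real implementations add safeguards such as rules against skipping dose levels. It illustrates why the next dose cannot be chosen until the previous patient’s validated response is in hand.

    import math

    # Prior guesses ("skeleton") of toxicity probability at each dose level
    # and the target dose-limiting-toxicity rate; both are hypothetical here.
    SKELETON = [0.05, 0.10, 0.20, 0.35, 0.50]
    TARGET = 0.25

    # One-parameter power model: p_i(a) = skeleton_i ** exp(a), prior a ~ N(0, 1.34^2),
    # evaluated on a discrete grid for simplicity.
    GRID = [i / 100.0 for i in range(-300, 301)]
    SIGMA = 1.34

    def posterior_toxicity(outcomes):
        """outcomes: list of (dose_index, had_toxicity) for patients treated so far.
        Returns the posterior-mean toxicity probability at each dose level."""
        log_post = []
        for a in GRID:
            lp = -0.5 * (a / SIGMA) ** 2                    # normal prior, up to a constant
            for dose, tox in outcomes:
                p = SKELETON[dose] ** math.exp(a)
                lp += math.log(p if tox else 1.0 - p)       # binomial likelihood
            log_post.append(lp)
        peak = max(log_post)
        weights = [math.exp(lp - peak) for lp in log_post]  # normalize for stability
        total = sum(weights)
        return [
            sum(w * SKELETON[d] ** math.exp(a) for a, w in zip(GRID, weights)) / total
            for d in range(len(SKELETON))
        ]

    def next_dose(outcomes):
        """Dose level whose estimated toxicity rate is closest to the target."""
        estimates = posterior_toxicity(outcomes)
        return min(range(len(estimates)), key=lambda d: abs(estimates[d] - TARGET))

    # Example: two patients treated at level 2 (0-indexed), one with a toxicity.
    print(next_dose([(2, False), (2, True)]))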

The key benefit from obtaining accurate response data faster is shortening decision cycles. If earlier data availability reduces dosing decision cycles from two weeks to two days, over 15 dosing cycles, the time savings would be 12 days x 15 cycles = 180 days, or six months. Regardless of the dose-finding technique, if patient response is rapid, shorter decision cycles will accelerate safety testing.

When design adaptations fail to come off as planned, such as when data needed to reestimate sample size arrive late, study designers can blame study managers for failing to deliver appropriate data when needed. Study managers, in turn, can insist that the study team make greater effort to meet the demands of adaptive studies. However, calling for greater effort is unlikely to solve the problem, especially when it is doubtful that the underlying problem is lack of effort.

Another ready excuse for failing to provide timely data for design adaptations is to blame the information technology professionals responsible for selecting the data capture system. The shortcomings of the trial management system’s functions for query processing and data management make another attractive target for blame. However, the cycle of blame will not ensure the ability to execute design adaptations as planned.

Ensuring efficient operations requires providing study managers and CRAs with the timely information they need to manage effectively. Effective operational management is the foundation for successful execution of design adaptations.

The Adaptive Solution to Operational Problems

The only reliable way to bring study operations up to the standard required for design adaptations is to apply the same adaptive principles to operations themselves. The ability to execute midcourse operational adaptations is the secret to reliably realizing the benefits of design adaptations. This requires providing study managers and CRAs with timely information. Study managers need timely information on the status of all key operations, including enrollment, query processing, monitoring, and site closeout. CRAs need a continuous flow of timely information on the status of operations at each of their sites—enough information to transform CRAs from box checkers to site managers.

Empowered by a continuous flow of such information, proactive study managers and CRAs can bring operations up to the performance level required to ensure availability of timely, accurate data as needed for design adaptations. Peter Drucker stressed the importance of managing based on performance measurements back in 1954.[9] The lesson still needs wider acceptance in clinical research.

Indeed, the leading cause of inefficiency in study operations is the lack of timely information. When enrollment lags, study managers and CRAs generally have little information about the reasons. There are usually few metrics on the many factors that influence enrollment rates, such as the effects of each inclusion/exclusion criterion, different recruitment messages, choice of advertising media, and so on. Even after identifying some investigational sites that are performing better than others, available information may not allow a rapid understanding of the reasons for success at some sites and failure at others. CRAs may have to do extensive, time-consuming detective work to trace enrollment delays to their source at each site.
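
Where screening records carry even minimal metadata, much of that detective work can be automated. The sketch below tallies enrollments by recruitment channel, screen failures by criterion, and screening volume by site; the field names and values are hypothetical, intended only to show the kind of roll-up that makes enrollment problems visible early.

    from collections import Counter

    # Hypothetical screening log: (site_id, referral_source, failed_criterion);
    # failed_criterion is None when the candidate was enrolled.
    screens = [
        ("site-02", "radio ad", None),
        ("site-02", "clinic flyer", "exclusion: uncontrolled hypertension"),
        ("site-05", "radio ad", "inclusion: age 18-65"),
        ("site-05", "web banner", None),
        ("site-05", "clinic flyer", "exclusion: uncontrolled hypertension"),
    ]

    enrolled_by_source = Counter(src for _, src, fail in screens if fail is None)
    failures_by_criterion = Counter(fail for _, _, fail in screens if fail)
    screens_by_site = Counter(site for site, _, _ in screens)

    print("Enrollments by recruitment channel:", dict(enrolled_by_source))
    print("Screen failures by criterion:", dict(failures_by_criterion))
    print("Screening volume by site:", dict(screens_by_site))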

Lack of timely performance metrics also delays query processing and data management tasks. Managers often have only superficial information about queries, yet experience shows that queries generally have recurring patterns. The recurrence indicates an underlying cause. The culprit may be a poorly designed case report form, a poorly worded question, inadequate training in study procedures, sites using untrained substitute personnel, or numerous other issues.

If timely, detailed status information on operations allows detecting the pattern early, study managers and monitors can identify and correct the underlying cause. With fewer queries, CRAs can function more effectively as site managers, ensuring that each site is performing at a high level. Common challenges in studies that adapt design elements in midcourse, such as achieving timely database lock for interim looks at data, become much easier.
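
The same kind of roll-up applies to queries. The sketch below counts recurring form-and-field combinations and the number of sites generating them: a pattern spread across many sites points to the case report form or the question wording, while a pattern concentrated at one site points to training or staffing. The query log format and names are hypothetical.

    from collections import Counter

    # Hypothetical query log: (site_id, form, field, query_text).
    queries = [
        ("site-03", "Vitals", "diastolic_bp", "Value out of expected range"),
        ("site-03", "Vitals", "diastolic_bp", "Value out of expected range"),
        ("site-07", "Vitals", "diastolic_bp", "Value out of expected range"),
        ("site-11", "ConMeds", "start_date", "Date precedes informed consent"),
        ("site-03", "Vitals", "diastolic_bp", "Value out of expected range"),
    ]

    by_field = Counter((form, field) for _, form, field, _ in queries)
    by_site_field = Counter((site, form, field) for site, form, field, _ in queries)

    for (form, field), count in by_field.most_common(3):
        sites = {s for s, f, fld in by_site_field if (f, fld) == (form, field)}
        print(f"{form}.{field}: {count} queries across {len(sites)} site(s)")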

Appropriate use of operational adaptations differs from the use of design adaptations in important ways. The use of design adaptations should be highly selective, to address issues specific to each study; sometimes, design-level adaptations are not appropriate. Design adaptations such as adaptive dose finding may require continuous decision-making, but decisions on most design adaptations come at intervals, often prespecified, as when sample size reestimation is to take place after enrollment of half the estimated study population. On the other hand, studies must use operational adaptations continuously and comprehensively, to tune every aspect of operations whenever performance metrics show the need.

The continuous, comprehensive use of operational adaptations greatly increases efficiency in its own right. For example, when timely information on factors influencing enrollment is available, adapting operations can make enrollment far more efficient. Enrollment for sexually transmitted disease (STD) studies can be notoriously difficult. However, close tracking of one STD study in the U.S. quickly revealed that one site was outperforming the rest. That site used a strategy unlikely to originate in the offices of major pharmaceutical companies: posting flyers for the study in nightclub restrooms. Once the successful strategy was identified, study managers encouraged other sites to follow suit. As a result, study enrollment completed two months ahead of schedule (Figure 1).

[Figure 1: Enrollment in the STD study]

An improvement like that seen in the STD study would have been far more difficult to achieve without access to detailed tracking information on factors affecting enrollment, including all the strategies used at each site and the results. A CRA starting from square one in identifying the successful strategy would likely meet roadblocks. Telephone calls might not immediately reach the person at each site with detailed knowledge of enrollment issues. Specifying ad hoc reports and asking each site for the information that those reports require would take even more time.

Leveraging Common Infrastructure

The success of both design adaptations and operational adaptations depends on the availability of timely, accurate information. For operational adaptations, the key data consist of performance metrics on all aspects of study operations with the potential to affect costs, timelines, and data availability. The data used to generate such performance metrics originate in the same fast, accurate data capture and validation systems that should be used to collect and validate patient data for design adaptations. Once the required infrastructure is in place for design adaptations, it is foolhardy not to use the same infrastructure to adapt operations for maximum efficiency.

However, merely having the infrastructure in place is not enough. There must also be an efficient, integrated system that allows resolving queries almost immediately rather than storing dubious information until a CRA performs a scheduled visit to the site responsible for the discrepancy.

The trial management system must continuously and automatically generate a wide range of performance metrics as data are collected. When enrollment is slow, for example, it is important to know what is working and what is not: best practices to share, best awareness strategies to back with additional financial resources, reasons for screen failures, and so on. When a patient does not show up, there must be immediate recognition of the missed appointment and action to retain the patient. Information on enrollment must be immediate, accurate, and actionable, preferably with a different perspective for each individual role. The CRA may focus more on helping the site with proper procedures; a study manager may be more concerned with allocating resources to get patients in the door.
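
As one small example of an automatic, role-ready signal such a system might generate, the sketch below flags scheduled visits that are past due and not yet attended so that a CRA can prompt the site before the protocol’s visit window closes. The visit-schedule fields and grace period are hypothetical.

    from datetime import date, timedelta

    # Hypothetical visit schedule: (patient_id, site_id, scheduled_date, attended).
    visits = [
        ("P-1001", "site-03", date(2009, 11, 2), True),
        ("P-1002", "site-03", date(2009, 11, 3), False),
        ("P-1015", "site-07", date(2009, 11, 4), False),
    ]

    def missed_visits(visits, today, grace_days=1):
        """Visits past due by more than the grace period and not yet attended."""
        cutoff = today - timedelta(days=grace_days)
        return [(p, s, d) for p, s, d, attended in visits
                if not attended and d <= cutoff]

    # Flag missed visits so the CRA can follow up while retention is still possible.
    for patient, site, when in missed_visits(visits, today=date(2009, 11, 6)):
        print(f"ALERT {site}: {patient} missed the visit scheduled for {when}")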

Adapting Both Design and Operations

There is synergy between design and operational adaptations. Efficient operations supply the timely, accurate information essential for design adaptations. Design adaptations allow studies to focus operational effort where it is most rewarding. For example, using sample size reestimation to reduce the patient population and adaptive enrollment to optimize recruitment messages delivers a powerful one-two punch against inefficiency. It is the same concept as optimizing a car’s gas mileage: Automobile designs that reduce weight and drag are essential for improving efficiency. However, even with the best design, fuel efficiency will suffer if the driver runs the air conditioning with the windows open.

Applying the adaptive approach to both design and operations provides managers and CRAs with a more coherent, comprehensive view of the challenges and possibilities for each study and each site. When this combination is supported by an infrastructure that provides timely data and performance metrics, CRAs can better understand study trends earlier, achieving new levels of agility.

Earlier knowledge also provides sponsors with greater control over their studies and minimizes risks that flow from typical research practices, rather than the inherent properties of the test drug. In particular, adaptive techniques reduce the risk of nasty surprises at the time of database lock—learning that the sample size is too small because the observed treatment effect turned out to be smaller than estimated, or that previously unknown issues in the quality of the data jeopardize the ability to draw meaningful inferences about efficacy and safety.

Toward the Ideal of Continuous Development

In the coming years, clinical programs will evolve to a more continuous model, proceeding without interruption until the candidate drug either fails in testing or satisfies the requirements for marketing approval. The burden on program planners will grow substantially, and they will need to equip study managers and CRAs to meet the challenges of a faster pace and more decision points, managing programs, studies, and sites more effectively. The reward will be greater efficiency, more precise control, and substantially reduced risk.

Realizing the vision of continuous development will require a library of techniques for adapting study designs based on observed data. This will, in turn, require study operations to adapt to deliver information needed for decisions on implementing design changes. With earlier access to more and better information, proactive study and program managers and CRAs will usher in an era of unprecedented agility in clinical research. Many new drugs will either fail in testing earlier, or reach market sooner and at lower cost.

Home Study article. Michael Rosenberg, MD, MPH, discloses that he has no actual or potential conflict of interest in relation to this article.

References

  1. Overbye D. 2009. Giant particle collider struggles. The New York Times, August 3.
  2. LHC: The Guide. CERN Communication Group. February 2009.
  3. Maiani L. LHC cost review to completion. CERN, October 16, 2001.
  4. The latest from the LHC. The Bulletin (CERN), August 3, 2009.
  5. LHC to run at 3.5 TeV for early part of 2009–2010 run rising later. CERN press release, June 8, 2009.
  6. Malakoff D. 2008. Spiraling costs threaten gridlock. Science 322: 210–3.
  7. Stover D. E-recruitment: trial by wire. Bio-ITWorld; available at www.bio-itworld.com/archive/031003/insights_recruitment.html.
  8. MacDonald TM et al. 2008. Effect on blood pressure of lumiracoxib versus ibuprofen in patients with osteoarthritis and controlled hypertension: a randomized trial. Journal of Hypertension 26: 1695–1702.
  9. Drucker P. 1954. The Practice of Management. New York: Harper & Row.

Michael Rosenberg, MD, MPH, is founder, president, and chief executive officer of Health Decisions, a global contract research organization specializing in high-efficiency adaptive solutions. A leader in adaptive research for more than 20 years, he focuses on integrated technology and agile clinical development methods that apply adaptive principles to both trial design and operations. His new book on this topic will be released early next year. He can be reached at mrosenberg@HealthDec.com.