Journal of Clinical Research Best Practices
Vol. 2, No. 7, July 2006

Increasing market, financial and regulatory pressures on pharmaceutical companies are generating interest in innovative strategies to conduct clinical research more efficiently. The traditional approach to clinical research is to design the trial, conduct the trial, and analyze the data, in three distinct, sequential steps. In contrast, adaptive trials use incoming information to modify the study in progress to save time, save money, and even generate more statistically useful data. Adaptive trials also improve subject safety by minimizing the number of subjects and minimizing their exposure to less-effective or less-safe products or dosages.
Although the notion of adaptive trials has been around for a long while, only recently has technology made this approach a viable option. In addition, recent advances in statistical tools and regulatory policy support adaptive trials. The primary limitation has been the speed at which we are able to collect data and use it to generate actionable knowledge. Most current data capture, management and analysis systems – notably web-based electronic case report forms (eCRF) – fail to support the timeliness requirements of adaptive trials for data capture and conversion to knowledge.
What Are Adaptive Research Methods?
Broadly defined, an adaptive method is one that allows us to change how a study is conducted during the study. We already use a primitive adaptive method when we perform an interim analysis to assess efficacy or futility. The rationale is clear: Why wait until the end of the study, when important information is available part-way through?
Clinical studies are always designed on the basis of imperfect information: estimates of the difference in outcome between study arms, dropout rates, and the frequency of different outcomes. We factor into the design our confidence in these estimates, the degree of statistical assurance we require, and a host of pragmatic issues such as tradeoffs among number of sites, population diversity, and enrollment rates. Conservative estimates increase the likelihood of statistical discrimination but require more subjects, wasting valuable resources. Optimistic estimates may generate too little data and compromise a study altogether by failing to adequately differentiate between the test article (including different doses) and comparators. Adaptive methods allow us to refine these estimates based on what we learn during the course of a study.
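To make this sensitivity concrete, here is a minimal sketch (not from the article; the standard two-sample z-approximation with illustrative numbers) of how the assumed effect size drives the required sample:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Subjects per arm to detect a mean difference `delta` with common
    standard deviation `sigma` (two-sample z-approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return ceil(2 * (z_alpha + z_beta) ** 2 * (sigma / delta) ** 2)

# An optimistic effect estimate keeps the study small...
print(n_per_group(delta=5, sigma=10))  # 63 per arm
# ...while a slightly more conservative one inflates it substantially.
print(n_per_group(delta=4, sigma=10))  # 99 per arm
```

Adaptive designs hedge against exactly this sensitivity: rather than committing to one of these numbers up front, the estimates are refined as data accrue.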
The FDA gives sponsors broad discretion in how they design early (Phase I and II) studies, including the use of adaptive methods. Indeed, a major purpose of early research is to generate data for designing Phase III pivotal studies, which are so big and expensive that an error in dosing, for example, is very costly. In a Phase III study, we must be much more rigid about factors such as who will be included in the analysis, and we know that a high number of dropouts in an intent-to-treat analysis will diminish the ability to show a difference even if one actually exists.
The most common adaptive methods are:
- Dropping inferior treatment groups (pruning)
- Early stopping (due to efficacy or futility)
- Sample size re-estimation
- Managing enrollment for faster completion
- Optimizing monitoring resources by allocating them according to need, as tracked by performance measures
- Adaptive randomization
- Anticipating and designing follow-up studies before completion of current work, minimizing between-study time
With all but the last of these methods, adjustments are made to a study during its course. The next-to-last method, adaptive randomization, adjusts continuously because the randomization pool is automatically updated after each outcome is determined. An example of this method is “play-the-winner,” where each successful outcome is added back to the randomization pool and each unsuccessful outcome is dropped. (In other words, if, three months into a study, there are 100 Arm A assignments and 100 Arm B assignments left in the pool, and a subject in Arm A completes the study with a positive outcome, the pool of Arm A assignments is increased to 101.) Over time, the study progressively tips toward the most successful treatment arm, thus reducing subject exposure to less-desirable treatment arms. The first three methods can be viewed as coarser forms of adaptive randomization because they, too, alter randomization.
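A toy simulation illustrates the tipping behavior. This is a hypothetical sketch, one of several possible urn formulations rather than a specific production algorithm: a success adds an assignment for that arm back to the pool, a failure removes one (with a floor of one so an arm is never eliminated outright), and the response rates of 70% and 40% are assumed for illustration:

```python
import random

def play_the_winner(n_subjects, p_success, seed=0):
    """Urn-style play-the-winner sketch: each arm starts with an equal
    pool of assignments; a success adds one assignment back to that arm's
    pool, a failure removes one (floor of 1)."""
    rng = random.Random(seed)
    pool = {"A": 100, "B": 100}   # assignments remaining per arm
    counts = {"A": 0, "B": 0}     # subjects actually assigned
    for _ in range(n_subjects):
        total = pool["A"] + pool["B"]
        arm = "A" if rng.random() < pool["A"] / total else "B"
        counts[arm] += 1
        if rng.random() < p_success[arm]:
            pool[arm] += 1                     # success: tip pool toward this arm
        else:
            pool[arm] = max(1, pool[arm] - 1)  # failure: shrink this arm's pool

    return counts

# With Arm A responding in 70% of subjects and Arm B in 40%, allocation
# progressively tips toward Arm A over the course of the study:
counts = play_the_winner(400, {"A": 0.7, "B": 0.4})
```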
Let’s consider a pruning example. Assume we have a Phase II study in which we believe that meaningful differences will be obtained with a sample size of 80 subjects per group. We have five arms (four doses plus comparator) and an enrollment rate of 50 subjects per month. Each subject costs $15,000. We will also assume that the outcomes require only a short period of treatment. Using a traditional linear (non-adaptive) approach, this study will enroll 400 subjects over 8 months, and cost $6 million (Figure 1a).
Adaptive pruning, however, gives a very different result (Figure 1b). Let’s assume that, after ten subjects in Arm A, it becomes apparent that the arm is undesirable for safety reasons. From that point forward, the 10 subjects/month that would have been assigned to Arm A are randomized to the other four arms, increasing their enrollment by 25% each. If we further determine, after 20 subjects in Arm B, that the dose in that arm is ineffective, the study is reduced to three remaining arms, each now enrolling about 17 subjects per month. If we carry the remaining three arms to the study’s conclusion, the net result is a requirement for only 270 subjects, an enrollment period of 5.4 months, and a cost of $4.05 million. So we get our answer in 32.5% less time, spend 32.5% less money, and enroll 32.5% fewer subjects than if we had done a traditional study.
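The arithmetic of this example can be checked directly (the figures are taken from the example above; a flat 50-subjects-per-month enrollment rate is assumed throughout):

```python
COST_PER_SUBJECT = 15_000  # dollars
ENROLL_PER_MONTH = 50      # total across all active arms

# Traditional design: 5 arms x 80 subjects per arm
traditional = 5 * 80                          # 400 subjects
trad_months = traditional / ENROLL_PER_MONTH  # 8.0 months
trad_cost = traditional * COST_PER_SUBJECT    # $6,000,000

# Adaptive pruning: Arm A stopped after 10 subjects, Arm B after 20;
# the remaining three arms are carried to 80 subjects each.
adaptive = 10 + 20 + 3 * 80                   # 270 subjects
months = adaptive / ENROLL_PER_MONTH          # 5.4 months
cost = adaptive * COST_PER_SUBJECT            # $4,050,000
saving = 1 - adaptive / traditional           # 32.5% saved across the board
```

Because the overall enrollment rate is unchanged, the same 32.5% saving applies to subjects, time, and money alike.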
Requirements for Adaptive Research
The primary component of adaptive research is near-real-time knowledge. We need a means to rapidly collect data, clean it, and access it in a form that can be the basis for action. In Figure 2, the dashed line shows the typical lag between collection and availability of data for analysis, including the clean-up period at the end of the study when most of the data has been collected but cannot yet be analyzed. The red line shows the ideal: data available for analysis in near-real time as it is generated over the course of the enrollment and treatment period, with a much-abbreviated clean-up period at the end of the study. The difference between the two lines, shown in yellow, is the gap in knowledge.
A second requirement is the ability to act immediately once a decision has been made. For example, if an arm is dropped, randomization to that arm must stop immediately because each additional subject represents an investment in unneeded information and an unwarranted burden and risk to subjects.
Most current eCRF systems do not address these requirements, and are a step backwards from some “less advanced” systems. Web-based systems are often plagued by slow data entry and do not address most causes of delay in data queries. An important limitation of web-based eCRF systems is that somebody has to sit down in front of a keyboard, remember how to use the system, and enter the data. In practice, doing this in a timely manner in a busy clinical office, often by clinical personnel, has proven elusive. As a result, data are delayed, and the more they are delayed, the less opportunity there is for efficient adaptive research. As Yogi Berra may have said, “In theory, practice and theory are the same thing; in practice, they aren’t.”
One way to address this limitation is to condense the traditional source-document-to-case-report-form (CRF) process into a single step: “eSource.” Figure 3 shows a simple form of eSource: the SmartPen™, an optical pen that allows clinical personnel to fill out paper CRFs. The SmartPen™ is docked, an exact copy of the completed form is immediately transmitted, and specialized software then reads the data from the pen and enters it into the study database.
The second requirement, the ability to act rapidly, for example by shifting randomization, requires a centralized information system that is tightly integrated with data management. With the SmartPen™ system, data is available immediately through a web portal. Query rates run one-tenth to one-third those of conventional web-based eCRF systems and one-twentieth those of paper systems, and the information is available in a matter of minutes, from anywhere in the world.
Adaptive methods can be used with blinded data, but are most effective with unblinded data. Data and Safety Monitoring Boards already use unblinded data to perform interim safety analyses; the same group, or a different one, can be responsible for adaptive decisions.
Apart from the technical requirements, this immediacy of decision-making can prove challenging for those accustomed to making decisions over weeks or even months. Adaptive methods require careful advance contingency planning to prepare for the inevitable twists and turns in any clinical trial. Any surprises will inevitably slow reaction times. In the example above, if a treatment arm shows no effect with the first 10 subjects, taking a month to decide whether that treatment arm should be stopped will cause another ten subjects to be enrolled.
Pivotal studies, being so large, can make excellent use of adaptive methodologies. For example, the duration of a recent multinational evaluation of a treatment for metastatic breast cancer was reduced by about 20% by re-estimating sample size based on the observed magnitude of effect. Pivotal studies also tend to focus on very tight management of the study itself, such as subject recruitment. Especially here, the value of the adaptive method is readily apparent: for example, studies using the SmartPen™ system described above have established industry best-practice benchmarks for speed of enrollment in four different therapeutic areas.
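As an illustration of one common flavor of sample size re-estimation, the sketch below (hypothetical; not the cited study's method, and it omits the alpha adjustments a real design would apply for the interim look) re-estimates the nuisance standard deviation from pooled, treatment-blind interim data and recomputes the per-arm target for the originally assumed effect:

```python
from math import ceil
from statistics import NormalDist, stdev

def reestimate_sample_size(interim_values, delta, alpha=0.05, power=0.80):
    """Blinded sample-size re-estimation sketch: re-estimate the SD from
    pooled interim data (blind to arm labels), then recompute the per-arm
    target via the two-sample z-approximation."""
    sigma_hat = stdev(interim_values)  # treatment-blind pooled estimate
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma_hat / delta) ** 2)

# Illustrative interim data (sample SD ~3.16) and an assumed effect of 2:
target = reestimate_sample_size([10, 12, 14, 16, 18], delta=2)
```

Because the interim data are pooled across arms, this form of re-estimation preserves the blind, which simplifies its regulatory acceptance relative to unblinded, effect-based re-estimation.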
Because adaptive methods are nontraditional, study planners may be concerned that the FDA and other regulatory authorities will not accept the results. Fortunately, at least the FDA and European regulatory authorities are receptive, provided appropriate explanations, including statistical modeling, are made available. Software for statistical modeling of adaptive studies is now available commercially and free of charge from academic groups. The study’s IRB(s) should also be consulted. The informed consent form should clearly describe any impacts on the probability of a subject being assigned to each arm of the study.
This article touches only the surface of an approach that is increasingly recognized as being key to improving research efficiency: the confluence of technology, regulatory and statistical developments that focus on deriving and using maximum information as early as possible from each study subject. While the precise details of implementation will differ in each set of circumstances, the ultimate benefit of adaptive methods is enabling precisely that – a very specific, very rapid means of individualizing each study to ensure that the greatest efficiencies are realized. One way or another, adaptive research is the future of our industry.
Ultimately, however, there is no technology or methodology that can substitute for clinical and scientific judgment. Pruning, go/no-go, and all other key decisions that make drug development challenging will remain the domain of human expertise, knowledge and judgment. Adaptive methods are very powerful tools that leverage current processes and demonstrate great ability to get more data, of better quality, earlier in the process than is otherwise possible, yielding knowledge – the foundation of decision-making.
Michael Rosenberg, MD, MPH is President and CEO of Health Decisions, Inc, a contract research organization and developer of the SmartPen™. Contact him at 1.919.967.2399 x229 or mrosenberg@HealthDec.com.