Commentary: Adaptive Trials in Practice

Bio-ITWorld eCliniqua

By Michael Rosenberg

November 19, 2007 - While adaptive methods have generated great excitement for their ability to bring clinical research to new levels of efficiency, there remains considerable confusion about putting the principles that underlie this approach into practice. This is understandable given that discussions of adaptive methods have focused largely on technical issues such as statistical methodology.

A clear understanding of practical and operational issues is equally critical to the successful use of any adaptive technique, whether for strategic (design) or tactical (operational, such as enrollment) purposes. I will address some of the more common points of confusion encountered when implementing adaptive processes.

First, confusion often exists about which adaptive techniques to use. Perhaps the most powerful component of the adaptive approach is the ability to make midcourse corrections that help ensure a study meets its goals, rather than waiting until the end-of-study analysis reveals whether it has. Even when fortune smiles, there is virtually no chance of getting all the key parameters of a study right at the planning stage; even an element as central as study size rests on several unknowns (magnitude of effect, performance of the comparator, dropout rate, variability of the data, and others).

Adaptive techniques provide the ability to adjust each of these parameters based on study experience to date. Most often, data are examined at midcourse, once parameter values have stabilized. The most powerful aspect of the adaptive approach goes beyond an interim look to stop a futile trial: it also allows adjusting key parameters to ensure that informational goals are being met. For example, tracking the size of the treatment effect actually observed during the trial makes it possible, subject to careful operational procedures, to refine the estimate used during planning and adjust the sample size to ensure adequate statistical power.
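
To make that concrete, here is a minimal sketch in Python of recomputing a required sample size with interim estimates of effect size and variability. The formula is the standard two-sample z-approximation; all numeric values are hypothetical and not from the article, and a real trial would perform this re-estimation under a pre-specified adaptive design with appropriate statistical controls.

```python
# Minimal sketch of sample-size re-estimation for a two-arm comparison of means.
# All numbers are illustrative assumptions, not values from the article.
import math
from statistics import NormalDist

def per_arm_sample_size(effect: float, sd: float,
                        alpha: float = 0.05, power: float = 0.90) -> int:
    """Per-arm n for a two-sample z-test: n = 2 * (z_a + z_b)^2 * (sd / effect)^2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = z.inv_cdf(power)           # target power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * (sd / effect) ** 2)

# Design-stage guesses vs. interim observations (hypothetical values).
planned = per_arm_sample_size(effect=5.0, sd=12.0)   # planning assumptions
interim = per_arm_sample_size(effect=4.0, sd=14.0)   # smaller effect, noisier data
print(f"planned n/arm: {planned}; re-estimated n/arm: {interim}")
```

With these invented numbers, the smaller observed effect and larger variability roughly double the required sample size, which is exactly the kind of shortfall an end-of-study analysis would reveal too late.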

Second, there is confusion about the infrastructure required for the adaptive approach. The infrastructure must be able to provide more data, of better quality, earlier in the process. It is tempting to believe this need is met by Web-based electronic data capture (EDC), which is far superior to the pen-and-paper data capture that still dominates the industry. However, EDC typically involves two transcription steps and delays of days to weeks before data are available.

Even more important, current commercial EDC systems provide little or no ability to manage key performance metrics such as query rates or enrollment figures. Other systems, including electronic pen and some fax-back systems, provide machine-readable data entry with immediate validation and reporting. Electronic pen systems, in particular, can track a broad variety of case report forms and performance metrics and put this information on a sponsor's desktop before a patient even leaves the site. Web-based EDC lacks such capabilities.

Third, there is some confusion about the process changes required by adaptive approaches. There is more to them than upgrading technology. Each sponsor must internalize new processes focused on continuous assessment and refinement through timely decision-making based on key performance indicators. Achieving this involves more than a small group of biostatisticians; the entire company must embrace the new approach rather than the old linear model of one separate study after another.

If you don't know that one site is enrolling faster than another, that one site has a lower query rate, or why one interviewer consistently makes errors on a particular set of questions, you are not getting full value from adaptive techniques. These techniques extend to scheduling monitoring visits based on need (query rates and the number of accumulated unmonitored fields) rather than on arbitrary schedules, as the sketch below illustrates. Even the knowledge that one good monitor can verify twice as many fields as a junior monitor can prevent the waste of expensive resources.
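
The following is a hypothetical sketch in Python of such need-based triage: sites are ranked for monitoring visits by query rate and backlog of unmonitored fields rather than by a fixed calendar. The site data and the weighting factor are invented for illustration.

```python
# Hypothetical sketch of need-based monitoring triage: rank sites by data
# quality (query rate) and backlog (unmonitored fields), not a fixed calendar.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    query_rate: float        # queries per 100 data fields (invented metric)
    unmonitored_fields: int  # fields entered but not yet source-verified

def monitoring_priority(site: Site, w_quality: float = 10.0) -> float:
    # Higher score = visit sooner; the weight is an arbitrary illustration.
    return site.unmonitored_fields + w_quality * site.query_rate

sites = [
    Site("Site A", query_rate=1.2, unmonitored_fields=400),
    Site("Site B", query_rate=6.5, unmonitored_fields=250),
    Site("Site C", query_rate=0.8, unmonitored_fields=900),
]
for s in sorted(sites, key=monitoring_priority, reverse=True):
    print(f"{s.name}: priority {monitoring_priority(s):.0f}")
```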

Finally, there is a tendency to think of the IT requirements for adaptive methods in terms of upgrading individual technology components. What is actually needed is an integrated IT infrastructure capable of continuously providing timely and accurate information and performance indicators. Study functions — data and performance metrics, data and query management, randomization — must all be integrated in a single system easily accessible via the Web or intranet.

Any study should start with key performance indicators that enable members of the study team to see the individualized information they need in real time. Middleware that digests and reports the data is even more important than efficient data capture. A key benefit of adaptive infrastructure is the standardization that enables a high-velocity program to proceed despite the inevitable shortcomings of some study team members. For example, managers can review performance indices for the staff they supervise; a monitor who consistently writes far fewer manual queries than peers may require additional training or other intervention. In a word: management!
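
As a simple illustration of that kind of KPI review, the sketch below (Python; the monitor names, query counts, and one-standard-deviation threshold are all invented) flags monitors whose manual-query output falls well below that of their peers.

```python
# Illustrative KPI flagging: surface monitors whose manual-query output is
# far below their peers'. Names, counts, and threshold are invented.
from statistics import mean, stdev

manual_queries = {"monitor_1": 48, "monitor_2": 52, "monitor_3": 11, "monitor_4": 45}

mu = mean(manual_queries.values())
sigma = stdev(manual_queries.values())
for name, count in manual_queries.items():
    if count < mu - sigma:  # one-sigma cutoff, chosen arbitrarily for illustration
        print(f"{name}: {count} manual queries; review for possible retraining")
```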

In short, adaptive studies require infrastructure and processes that are above the standard of conventional trials. While the adaptive approach is more demanding, it is also much more productive and rewarding, not only for the people who execute studies, but also for sponsors who are critically dependent on the success and efficiency of clinical development programs.

Michael Rosenberg, MD, MPH, is president and CEO of Health Decisions Inc., a clinical research organization specializing in the technology and processes of adaptive research. He can be reached at mrosenberg@healthdec.com.