Adaptive (Risk-Based) Monitoring

A group of senior executives from a company we work with mentioned today that they had been hearing pitches for adaptive monitoring from some of the big CROs they use. “Nothing there, but I did get the sense that these guys want to sell us a big project,” was the consensus. We hear similar comments frequently in discussions of adaptive monitoring.

Risk-based monitoring is the closely related hot button of the moment, in the wake of the FDA’s guidance document issued in August 2011. It seems, however, that few really understand what it means or how to put the concept into practice. With risk-based monitoring, talk seems to be years ahead of action for much of the industry.

Adaptive monitoring is interesting to me because it’s a good example of an essential tool used for many years in other industries but not in clinical research. The drivers are the same as for statistical process control in manufacturing: instead of a one-size-fits-all approach, the question is how to focus resources where they are needed and back off where they are not. The economic implications are profound.

The best place to start defining adaptive monitoring is by correcting some common misconceptions. Adaptive monitoring is NOT:

  • An arbitrary decision to do less work (“we’re only going to do 75% of SDV”);
  • A way of applying pre-established criteria from earlier studies to the current one (“we’ve got patient profiles, and we’re going to adjust accordingly”);
  • Reliance on criteria that bear an unknown relationship to quality measures (“we’re going to let the CRAs decide what to monitor and what not to”).

I’ve heard people mistakenly cite all of the above “nots” as examples of why their monitoring approach is smart, when they really show a failure to understand the core principle: adaptive monitoring is all about adapting your monitoring approach to the current study to ensure high quality under that study’s unique conditions.

What adaptive monitoring IS: 

  • A flexible system of tracking quality and adapting monitoring effort according to the needs of the current study (a minimal sketch follows this list);
  • A systematic way of pinpointing and addressing quality issues specific to the current study;
  • Close attention to maximizing the monitoring that can be done from a central location rather than during a site visit;
  • Involvement of a strategically focused monitoring team, not a collection of isolated individuals who are often junior and lack management experience;
  • Use of continuous measures of quality that may differ from week to week;
  • Systems and processes that enable immediate determination of the current status of quality measures without having to go to the site.
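
To make that loop concrete, here is a minimal Python sketch of the core idea: continuously refreshed quality measures per site drive the source data verification (SDV) fraction up or down. Everything here is hypothetical; the SiteMetrics fields, the thresholds, and the sdv_fraction rule are illustrative assumptions, not any vendor’s actual algorithm.

```python
# A minimal, hypothetical sketch of adaptive monitoring's core loop.
# All names, metrics, and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    """Rolling quality measures for one site, refreshed continuously."""
    site_id: str
    query_rate: float         # queries per 100 fields entered
    entry_lag_days: float     # median days from visit to data entry
    protocol_deviations: int  # deviations logged in this review window

def sdv_fraction(m: SiteMetrics, current: float = 1.0) -> float:
    """Return the share of fields to source-verify at the next visit.

    Starts from the current level and moves it down when measured
    quality is high, or back to full SDV when a quality signal trips.
    Thresholds are illustrative only.
    """
    fraction = current
    if m.query_rate < 0.5 and m.entry_lag_days < 3 and m.protocol_deviations == 0:
        fraction *= 0.5   # sustained high quality: back off
    elif m.query_rate > 2.0 or m.protocol_deviations > 2:
        fraction = 1.0    # quality signal tripped: return to full SDV
    return max(0.2, min(1.0, fraction))  # keep within a floor and ceiling

# Example: a clean site currently at 80% SDV drops to 40% next visit.
site = SiteMetrics("site-042", query_rate=0.3, entry_lag_days=1.5,
                   protocol_deviations=0)
print(sdv_fraction(site, current=0.8))  # -> 0.4
```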

Beginning to get the picture? The reason that few groups (and none of the common commercial EDC systems) can really do adaptive monitoring is that they lack the essential elements required by each individual study: 

  • The ability to continuously track quality outcomes without having to visit sites, enabling continuous adjustments in monitoring focus and intensity;
  • Direct and surrogate measures of data quality that are available within 24 hours of when the data are generated (not when they are entered in the system!);
  • Algorithms that continuously assess predictors of quality;
  • A sliding-scale approach that allows bigger adjustments in monitoring activities as quality measures become more extreme (see the sketch after this list);
  • The ability to pinpoint specific quality issues rather than settling for aggregate measures that aren’t helpful in identifying remedial steps;
  • Mechanisms for adjusting monitoring effort according to the quality of each type of data. 
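
The sliding-scale element lends itself to a short illustration. The sketch below is a hypothetical construction, not a published algorithm: the further a quality measure drifts from its target, the larger the resulting adjustment to monitoring intensity, with deterioration ramping up faster than improvement eases off.

```python
# Hypothetical sliding scale: adjustment size grows with how far a
# quality measure sits from its target. Bands, caps, and coefficients
# are illustrative assumptions only.

def sliding_scale(observed: float, target: float, tolerance: float) -> float:
    """Map a quality measure to a monitoring-intensity multiplier.

    `observed` and `target` are on a lower-is-better scale (e.g. query
    rate per 100 fields); `tolerance` is the width of one band.
    """
    bands = abs(observed - target) / tolerance
    if observed <= target:
        return max(0.25, 1.0 - 0.1 * bands)  # better than target: ease off gently
    return min(4.0, 1.0 + 0.5 * bands ** 2)  # worse than target: ramp up steeply

# Query-rate target of 0.5 per 100 fields, tolerance 0.25:
print(sliding_scale(0.4, 0.5, 0.25))  # ~0.96: slightly better, small easing
print(sliding_scale(1.5, 0.5, 0.25))  # 4.0: far worse, capped maximum ramp-up
```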

Adaptive monitoring is not about gross adjustments such as monitoring a lower percentage of all data during a site visit. Systems must direct each monitor to focus exactly the right amount of attention on each type of data during remote monitoring and on every site visit. Expecting monitors to decide which data to monitor is asking for trouble. 
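
What might “directing each monitor” look like in software? A hypothetical sketch follows; the data-type names, criticality weights, and the visit_plan formula are invented for illustration. The point is that the system combines how critical a data type is with how well it is currently measuring, and hands the monitor a concrete per-type plan rather than leaving the choice to individual judgment.

```python
# Hypothetical per-data-type direction for a monitoring visit: the
# system, not the monitor, decides how much attention each category
# of data gets. Weights and formula are illustrative only.

CRITICALITY = {"primary_endpoint": 1.0, "safety": 1.0,
               "eligibility": 0.8, "routine_labs": 0.3}

def visit_plan(quality_by_type: dict[str, float]) -> dict[str, float]:
    """Return the SDV fraction per data type for the next visit.

    `quality_by_type` scores current quality from 0 (worst) to 1
    (best); attention rises as quality falls, scaled by criticality.
    """
    return {dtype: min(1.0, CRITICALITY[dtype] * (1.5 - quality))
            for dtype, quality in quality_by_type.items()}

# Clean critical data still get the most attention; noisy routine labs
# (quality 0.4) get ~0.33, nearly double their share when clean (~0.18).
print(visit_plan({"primary_endpoint": 0.9, "safety": 0.95,
                  "eligibility": 0.9, "routine_labs": 0.4}))
# -> {'primary_endpoint': 0.6, 'safety': ~0.55, 'eligibility': ~0.48,
#     'routine_labs': ~0.33}
```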

While the capabilities required for adaptive monitoring may be uncommon, the good news is that systems providing them do exist and have been used and refined for a number of years.

Does adaptive monitoring work? An unqualified yes, as demonstrated by quality measures in a large global registration study that has used adaptive monitoring techniques from start to finish. That study’s quality measures exceed those of nearly all conventionally monitored studies (a query rate of 0.2 per hundred fields). The study began with 100% SDV, adjusted dynamically based on actual quality measures, and is now down to about 20% SDV. Despite the reduction in SDV, overall quality has improved, with the highest quality in critical data such as primary endpoints and safety information. The key is adjusting the focus and intensity of monitoring to where it is most needed rather than dispersing attention uniformly across critical and routine data.

PS: Adaptive monitoring isn’t a standalone feature that works in isolation from other adaptive study operations. Adaptive operations are all about synergy across all types of activities involved in running a study—and that will be the subject of another post.
