Risk-Based Monitoring: Where are the Benefits?

I got a call this morning from a frustrated colleague who runs the oncology franchise in Europe for a big pharma company. Despite a substantial investment in software that promised increased focus and efficiency through risk-based monitoring, my colleague is seeing neither. Did I have any suggestions?

Questions about why technology is not producing expected productivity gains have plagued clinical research for years. A previous post in this blog, Why Hasn’t Technology Paid Off in Clinical Research?, included a chart from a Clinical Trials Transformation Initiative paper showing the sharp divergence between projected cost reductions from new technology and actual cost increases. My colleague’s experience shows that risk-based monitoring products can be just as disappointing as earlier technology waves.

One underlying issue is that technology for managing complex enterprises increases efficiency primarily by providing timely information that enables earlier, better decisions. Our industry, by contrast, still overwhelmingly operates on long decision cycles tied to periodic reports. The industry typically manages by looking backward, seeking to identify and correct past errors rather than to predict and prevent future ones. Consistent with this management mentality, most risk-based monitoring approaches are rooted in lessons from databases of past trials and in error rates in data already collected.

While the FDA’s risk-based monitoring guidance stresses the importance of creating a monitoring plan that reflects the risks specific to each trial, in practice these plans are based on historical knowledge and remain fixed from planning stages through database lock. Few trial management systems and risk-based monitoring approaches are equipped to adapt to actual conditions observed during execution. The chief missing ingredients are immediacy of information about what is happening in the field and a forward-looking management approach based on predictive and preventative capabilities.

From conversations with my colleague in Europe, I believe he thought he had purchased a fully functional risk-based monitoring solution. However, he overestimated the capabilities of the software package and greatly underestimated the additional elements required to deploy effective risk-based monitoring teams. This is not surprising given the enormous gulf between the expectations created by papers on risk-based monitoring methodologies and what is possible on a first study with a new software package and clinical staff who lack experience in risk-based monitoring.

Returning to my colleague’s request for suggestions, I’m afraid I could only advise him to be patient and to accept the limitations of the new software package in the hands of an inexperienced team. If my colleague had contacted me earlier, I would have offered several suggestions about selecting a risk-based monitoring solution. For example, it may be helpful to ask technology and solution providers these questions:

  1. Does the solution provide immediate, actionable information about site performance and data quality? Ensuring data quality requires current knowledge about what is happening in the field, including an understanding of problems and potential solutions. Dashboards and reports should provide a basis for action, not just a starting point for analysis. Periodic reports are a poor substitute for streaming information as a basis for monitoring and trial management.
  2. To what degree does the solution rely on detecting errors after the fact? Detecting errors is necessary to make corrections and ensure data quality but not sufficient to reap substantial efficiency increases from risk-based monitoring. Concentration on fixing errors after the fact ignores one of the central lessons of lean manufacturing: preventing errors is far more efficient than correcting them.
  3. What capabilities does the solution offer for predicting data errors? Can a risk-based monitoring solution identify upstream indicators of later quality issues? Predictive capabilities enable error prevention.
  4. Does the solution allow adjusting risk indicators during a trial? The risk indicators identified during study planning may not prove to be the best indicators once the trial is underway. Does the solution allow adjusting indicators based on the predictive value observed during execution?
  5. Are you really looking at a potential solution, or just a step toward a solution? There is a tendency to think of every proposed methodology and software package as a risk-based monitoring solution. Risk-based monitoring requires a methodology, but does the available technology fully support the chosen methodology? If you are selecting a software package, will the burden of effective implementation delay realization of any benefits for months or years?
  6. Does the study team know how to make optimal use of the chosen technology? The success of any risk-based monitoring approach depends on the people involved. If monitoring teams don’t know how to use a new package, how quickly can they come up to speed? There is more to it than learning the functions of new software. Effective risk-based monitoring requires a study team that has completed the cultural transformation away from traditional 100% source data verification (SDV). Monitoring based on 100% SDV minimizes the need for insight into which data are most important to study success and why. Risk-based monitoring does the opposite. For best results, every member of the study team must embrace the goal of ensuring the quality of the information that matters most to study success. (Among other benefits, when a CRO is involved, the shared focus on elements critical to study success allows highly specific alignment of the interests of the sponsor and the CRO.) Insight into potential threats to critical data and vigilance for unforeseen issues become important considerations in selecting monitors. Monitors develop enhanced capabilities as they gain experience with risk-based technology and processes.
  7. Will increased expenditures in other areas more than offset claimed cost savings from a risk-based monitoring approach? This is an excellent question and its urgency increases with the scale of required upfront technology expenditures. As noted at ClinOpsToolkit: “…if we are spending less money for on-site monitoring, we are likely inclined to spend additional money in other service areas to substitute the appropriate level of oversight and remotely verify the safety of subjects and quality of data.” The ideal is to find a risk-based monitoring approach that can both improve quality and decrease costs. Direct experience tells me that this ideal is attainable, but this is definitely a question to ask any technology or solution provider. Costs can add up, especially when an approach requires adding discrete products such as rules-based or central statistical monitoring packages to the cost of a CTMS with risk-based monitoring features.
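The ideas in points 3 and 4 can be made concrete with a small sketch. The Python below is a minimal, hypothetical illustration (the query-rate indicator, site names, and the `k` parameter are my own assumptions, not taken from any particular risk-based monitoring product): it computes a simple key risk indicator per site and recalibrates the flagging threshold from data observed during the trial, rather than fixing it at the planning stage.

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class SiteStats:
    site_id: str
    data_points: int   # data points entered so far
    queries: int       # data queries raised so far

def query_rate(s: SiteStats) -> float:
    """Queries per 100 data points -- a simple, hypothetical risk indicator."""
    return 100.0 * s.queries / s.data_points if s.data_points else 0.0

def adaptive_threshold(sites: list[SiteStats], k: float = 1.0) -> float:
    """Recompute the flagging threshold from current trial data
    (mean + k standard deviations) instead of a fixed planning-time value."""
    rates = [query_rate(s) for s in sites]
    return mean(rates) + k * stdev(rates)

def flag_sites(sites: list[SiteStats]) -> list[str]:
    """Return sites whose indicator exceeds the current adaptive threshold."""
    threshold = adaptive_threshold(sites)
    return [s.site_id for s in sites if query_rate(s) > threshold]

sites = [
    SiteStats("DE-01", data_points=1200, queries=24),
    SiteStats("FR-02", data_points=900, queries=18),
    SiteStats("IT-03", data_points=1100, queries=99),  # unusually high query rate
    SiteStats("ES-04", data_points=1000, queries=21),
]
print(flag_sites(sites))  # only the outlier site is flagged
```

In a real deployment the indicator would of course be richer (enrollment pace, protocol deviations, safety reporting lag), but the design point is the same: the threshold is a function of data observed during execution, so the study team can tune which indicators carry predictive value as the trial progresses.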

Summing up, a main consideration when selecting a risk-based monitoring solution is not to mistake a part of the solution for the whole. Don’t confuse a methodology with a software package, or a software package with the ability to put risk-based monitoring into practice. And be sure to evaluate claimed savings against the aggregate incremental expenditures to all vendors supplying components of a proposed solution. Otherwise, like my European colleague, you may find the benefits of risk-based monitoring elusive.


One Response to “Risk-Based Monitoring: Where are the Benefits?”

  1. Nadia Bracken on

    Dr. Rosenberg, thank you for the link back to the ClinOps Toolkit in your helpful list of tips in this article, which I just found tonight in my track-backs. I have now subscribed to your blog via my RSS service, and I will be sharing it with my audience and faithfully following along moving forward.

    This week I attended the SCOPE conference in Miami and heard a lot about clinical trial optimization technology. When evaluating new products, I will be using this post as a checklist. During the final keynote, I heard an important reminder that we need to constantly verify that we are measuring health, not just data. To that point, if the metrics you collect in study trackers do not influence the corrective action you need to take, then don’t waste the time, money, and resources to collect them in the first place. You say in point 1 to insist on real-time data; I will just add the word “relevant” to my printed-out version. Great piece; thanks again!

