Health Decisions CEO Dr. Michael Rosenberg explains how adaptive monitoring can be a better way of tracking investigative site quality and performance.
August 2010 - You never thought it possible: the clinical trial under your organization’s management is coming under fire for inadvertently overdosing pediatric patients for 20 days running.
How could this have happened to you? You would never tolerate any practice that could lead to such a result. From protocol development to assignment of monitors and investigative site visits, you had insisted on close adherence to standard industry practices. You had budgeted for an ample number of monitors and site visits, and the visits had taken place.
Nevertheless, you’ve just received a warning letter from the US Food and Drug Administration saying:
“You failed to ensure proper monitoring of the investigation [21 CFR 312.50].”
“You failed to ensure that the investigations were conducted in accordance with the general investigational plan and protocols contained in the IND [21 CFR 312.50].”
The FDA’s letter provided troubling details, citing your study for:
- repeatedly overdosing pediatric patients, sometimes for as long as 20 days;
- failing to create required individual titration schedules for dosing each patient;
- failing to have qualified people administer the test drug;
- failing to document that the test drug was maintained in the required temperature range; and
- ineffective study monitoring and site management, resulting in glaring oversights such as missing titration schedules and test drug shipped at temperatures outside the range specified by the protocol.
Perhaps most galling of all, your organization had previously reviewed study data, identified many of these issues and ordered vigorous corrective measures. You had suspended enrollment at some sites, instituted retraining for all sites and monitors and increased drug accountability. You had done all this long before receiving the warning letter and learning that the problems had continued despite corrective efforts.
The above case is a composite of recent FDA warning letters to more than one major pharma company. However, it accurately depicts the situation confronting shocked programme and study managers in 2010. What went wrong in their studies? Where did they depart from the standard industry practices intended to prevent such problems? When they identified problems internally and ordered corrective measures, why did problems continue?
In all likelihood, these major companies did not depart from standard industry practices. Indeed, they probably followed standard industry practices to the letter except for occasional human failings along the way. The problem was depending on standard practices to keep studies in compliance. Unfortunately, these standards are often too weak to prevent problems from slipping through the cracks.
Following standard study management practices often means:
- not receiving enough information from the field to manage sites effectively;
- receiving information too late to allow effective management response to changing conditions in the field;
- lacking the ability to track the implementation of corrective measures;
- having no idea that your study is suffering from human failings on the front lines, such as monitors giving scant attention to the dosing of children with an experimental medicine; and
- leaving senior managers in the dark until it is too late to intervene and fix problems.
In summary, standard industry practices lull study managers into complacency, but leave critical information gaps that expose sponsor companies to major risks. Increased FDA scrutiny of site performance is exposing these weaknesses.
The root of the problems
There are several reasons why standard study management practices result in failures that lead to FDA warning letters. First, they tend to address compliance issues only in terms of surface considerations such as the number of monitors assigned and the frequency of investigative site visits. Such considerations are important but they are only well-meaning steps towards a potential solution.
The point of having monitors and scheduling site visits is to obtain information that ensures compliance. However, the most important type of information for ensuring compliance is timely management information. Typical clinical studies do an inadequate job of collecting this information, and thus do an inadequate job of ensuring compliance. By depending largely on periodic site visits, these studies often rely on information processes that resemble those of old library systems in which people had to venture into stacks to check the accuracy of paper-based card catalogs. Not only was this process manual and tedious, but the information listed in card catalogs was often out-of-date or otherwise erroneous. Modern libraries have a far superior flow of timely management information. They use tracking systems to provide continuously updated information about the status of millions of books.
But in clinical research, although we are not examining an inventory of inanimate objects such as books but testing experimental products on human subjects, we rely on standard practices that still largely correspond to those developed in the age of card catalogs. We think of study management primarily in terms of sending people out to remote investigator sites at set intervals to find out what is going on. To manage effectively, we should be thinking in terms of continuously collecting and tracking key performance metrics. In the case example above, such information might have told study managers enough to determine whether titration schedules had been created, were being used and were working as intended.
In part, our approach as an industry to the management of clinical studies may stem from the necessary focus on following FDA guidelines. Because this compliance is so necessary, we may think that it is sufficient. However, the guidelines are not concerned with operational efficiency or the flow of information essential for effective study management. Those concerns are not the FDA’s responsibility. Nevertheless, FDA guidelines condition the way in which we think about managing studies.
The guidelines are extremely general: “ensure proper monitoring” [1], “visit the investigator at the site of the investigation frequently enough to assure that…” [2]. The FDA goes beyond this to identify factors to consider for each study in determining the number of monitors required and the expertise and training that the monitors should have, including the number of investigators, the number and location of sites, the type of product, the complexity of the study and the nature of the disease or health condition. The FDA also calls for written monitoring procedures, pre-investigation site visits, periodic site visits, review of subject records and keeping records of site visits.
Such language is all about selecting people to serve as monitors, deciding how many monitors are needed, pre-determining intervals for sending monitors out to check up on sites and keeping records. Study management tends to focus on the same areas, devoting scant attention to obtaining timely information on performance or deciding what information must be available at which points in the study in order to enable effective study management. Because we have no way of knowing what is going on unless we send people physically to sites, that is what we do. But those people are often focused on “checkbox” items and may lack the experience to look up from checklists and understand what is really going on at the site.
Ensuring high costs but not compliance
Consistent with the language in FDA guidance, sponsors tend to think of monitoring as entirely about people and site visits. Hence, sponsors are likely to think that the best way to prevent the kinds of site issues that lead to warning letters is to hire more monitors and visit sites more frequently – to do more traditional monitoring at higher cost. Adding people, travel and cost can increase knowledge about what is going on at the site level. However, this solution is often both cost-prohibitive and as likely to multiply the original problems as to fix them.
High monitoring costs are already a serious issue in many trials. As much as two-thirds of the cost of a large Phase III clinical trial may go to monitoring and site management, and resolving a single query can cost as much as $350 [3]. Monitoring costs have become such a big issue that there are serious proposals to cut costs by reducing the number of Case Report Forms (CRFs), the number of CRF pages, the amount of data collected, the number of monitoring visits and the amount of site payments, all while maintaining the same number of patients and sites [4,5]. Proposals to make such changes merit serious consideration in light of the needs of each study. However, because the data collected must be both accurate and comprehensive enough to provide a basis for a regulatory filing, reducing costs by reducing the amount of data collected requires the greatest care.
Furthermore, regulators have the final say and they are, if anything, asking researchers to collect more data, not less. The consequences would be unpleasant if, after collecting less data and monitoring less frequently in order to cut costs, an approved drug wound up causing problems that could have been detected by the excluded data. Sponsors must also consider the potential costs of addressing safety issues that go undetected until a drug has reached a large population. The strategy of reducing trial costs by collecting and validating less information may well increase the risk of failing to detect potential post-marketing liabilities, which can cost into the billions of dollars. For our discussion, the key point is that monitoring costs are already so high that knowledgeable researchers are calling for reduced monitoring efforts, not adding more people and site visits.
Moreover, adding monitors and site visits does not necessarily bring better results. A heavy load of highly detailed monitoring work could strain the concentration of even the most conscientious employees. Monitors are expected to detect every discrepancy when issuing and tracking queries and comparing database contents with source documents. Human lapses in performance are inevitable in this kind of work. No matter how many monitors we hire, errors are bound to creep undetected into the study database. Furthermore, adding more monitors complicates workforce management and brings in people with more varied work habits and performance levels, increasing variability in the data itself. There are inherent ironies as well as high costs in trying to minimise human error by involving more humans.
The extravagant resource requirements of traditional monitoring suggest the need for a different approach rather than more of the same. This is especially true when, despite all the resources consumed, traditional monitoring fails to provide the desired results, as in the examples above.
Ensuring high compliance at reasonable cost
There are four elements to ensuring high compliance at reasonable cost: leveraging technology; implementing a systematic team approach; adapting the monitoring process based on what we learn during the study; and developing more effective monitoring plans.
The role of technology
Work processes that take full advantage of modern computing and communications technology can transform both monitoring and study management by providing a continuous flow of timely information on study status. This allows both monitors and managers to know far more at any given point about site and study status than has previously been possible. The technology involved is not exotic. Given well-conceived processes, commonplace computers and internet communications can take full advantage of electronic data capture (EDC) and site management software to ensure that study personnel have a full knowledge of site performance.
It is important to recognise that EDC does not automatically provide such a flow of information. In most cases, EDC captures patient data through manual keyboard entry, so the timeliness of that data depends on when site personnel can find the time and patience to work data entry into their schedules. In addition, EDC systems usually focus on patient data to the exclusion of most performance metrics that are crucial to successful study management. However, there are EDC systems and site management software that do provide the timely information flow required for effective study management, though contractual terms with sites as well as incentives may be necessary to ensure the immediate availability of this information.
Affordable, effective team monitoring
One of the greatest benefits of access to a flow of timely information is that monitors can perform many traditional tasks remotely. Remote monitoring allows individual monitors to stay current on the status of their assigned sites, and, more importantly, enables a team approach to monitoring that reduces the sponsor’s dependency on the performance of individual monitors. Any approach to monitoring in which more than one person tracks the status of each site encourages everyone involved to monitor each site with greater care. It also allows each monitor to detect the oversights and errors of other monitors, protecting the sponsor from vulnerability to individual failings.
However, implementing the team approach by simply assigning more monitors to track each site and make more frequent site visits quickly drives costs beyond acceptable levels.
The cost-effective way to reap the benefits of team monitoring is to blend continuous remote monitoring with periodic on-site monitoring. Indeed, continuous remote monitoring can keep study managers informed enough to reduce on-site monitoring and associated costs. Such reductions would be unwise unless justified by the flow of information. But often, the flow of site performance information can justify it, allowing study managers to adapt the allocation of monitoring resources.
Adaptive monitoring: dynamic and needs-based
Team monitoring that blends on-site and remote monitoring opens the door to adaptive monitoring, which can both reduce costs and improve data quality. Furthermore, adaptive monitoring can focus more and earlier monitoring effort where it is needed, bringing underperforming sites up to standard (or, failing that, allowing study managers to replace unsatisfactory sites). Adaptive monitoring replaces fixed schedules and rigid adherence to pre-ordained plans with a data-driven, needs-based approach. Remediation of errors remains important, but prevention of errors becomes possible and is far more cost-effective.
With the availability of near real-time patient data and performance metrics, adaptive monitoring allows dynamic allocation of resources and attention based on actual site performance. This extends to all monitoring activities, including site closeout. By devoting more monitoring attention at closeout to sites with the greatest number of unmonitored fields, study managers can ensure that an entire study closes out on schedule. The worst-performing site no longer delays closeout for an entire study.
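As a rough sketch of what needs-based allocation at closeout might look like, the following fragment distributes a fixed pool of on-site monitoring days in proportion to each site's backlog of unmonitored fields. The site names, backlog counts and size of the pool are hypothetical, purely for illustration:

```python
# Illustrative sketch: allocate a fixed pool of closeout monitoring days
# in proportion to each site's backlog of unmonitored data fields.
# All names and numbers below are hypothetical.

unmonitored_fields = {
    "Site 101": 1200,
    "Site 102": 150,
    "Site 103": 450,
    "Site 104": 200,
}

TOTAL_MONITOR_DAYS = 40  # assumed pool of on-site monitoring days

total_backlog = sum(unmonitored_fields.values())
allocation = {
    site: round(TOTAL_MONITOR_DAYS * backlog / total_backlog, 1)
    for site, backlog in unmonitored_fields.items()
}

# Sites with the largest backlog receive monitoring attention first.
for site, days in sorted(allocation.items(), key=lambda kv: -kv[1]):
    print(f"{site}: {days} monitoring days")
```

The point of the sketch is the principle, not the arithmetic: effort follows measured need, so the worst-performing site is worked down early rather than discovered at the end.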
Adaptive monitoring focuses primarily on the accuracy of patient data. However, it is also important to link data capture technology with an integrated study management system that automatically generates timely performance metrics for sites and entire studies. Performance metrics allow close tracking of site performance and are essential both for effective remote monitoring and for enabling the adaptive approach. Examples of performance metrics that provide a basis for detecting site problems and allocating monitoring resources include:
- query rate, by investigator and site;
- mean time from patient visit to data submission;
- mean time to query response;
- mean number of unresolved queries;
- the fields, forms and range checks generating the most queries;
- number of protocol violations; and
- number of adverse events and serious adverse events.
Study managers and site monitors can become adept at using performance metrics to identify potential problems as they first emerge. However, at the most general level, the metrics have obvious applications that provide immediate benefits. Sites with a high query rate or a disproportionate number of unresolved queries require early monitoring attention. Extensive use of such performance metrics has shown them to be excellent predictors of later issues in data quality and delays in site closeout and database lock. Furthermore, experience with performance metrics allows the development of algorithms that make systematic use of quality indicators to identify problem areas. We can then look much more intensively at such areas. Monitors know where it is most important to focus their attention.
The availability of such metrics expands the focus of monitoring from resolving individual queries and addressing isolated problems to identifying and correcting the underlying causes of poor performance. This allows monitors to prevent recurring issues before they increase costs, stretch timelines or compromise study efficiency and integrity.
The benefits of such performance metrics extend to higher level managers as well, providing a baseline for assessing the performance of individual sites and exposing issues that affect performance study-wide. If one CRF field has a surprisingly high query rate across all sites, the underlying problem could be poor training, ambiguous instructions or confusing CRF design. Fixing such problems early can greatly increase efficiency.
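The metric-driven flagging described above can be sketched in a few lines. The thresholds, field names and data structure below are illustrative assumptions, not values from any particular system; in practice each study would tune its own limits:

```python
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    """Hypothetical per-site performance metrics, for illustration."""
    site: str
    queries: int                 # queries issued against this site's data
    fields_entered: int          # data fields submitted by the site
    unresolved_queries: int
    mean_entry_lag_days: float   # patient visit to data submission

# Illustrative thresholds only; not industry standards.
MAX_QUERY_RATE = 0.05      # queries per field entered
MAX_UNRESOLVED = 20
MAX_ENTRY_LAG_DAYS = 5.0

def flag_sites(metrics: list) -> list:
    """Return sites whose metrics suggest early monitoring attention."""
    flagged = []
    for m in metrics:
        query_rate = m.queries / m.fields_entered if m.fields_entered else 0.0
        if (query_rate > MAX_QUERY_RATE
                or m.unresolved_queries > MAX_UNRESOLVED
                or m.mean_entry_lag_days > MAX_ENTRY_LAG_DAYS):
            flagged.append(m.site)
    return flagged
```

Sites returned by such a rule would receive earlier remote review or an unscheduled visit; the same inputs can feed more elaborate scoring algorithms as experience accumulates.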
Risk factors and effective monitoring plans
Monitoring plans play as important a role in ensuring compliance as technology and monitoring methods. Furthermore, effective technology, remote monitoring and a team-based approach allow greater flexibility in creating effective monitoring plans.
Study managers should consider three types of factors in developing a monitoring plan. The starting point includes basic factors, many of which are suggested by FDA guidelines. Other important factors usually receive less consideration in formulating monitoring plans. These include factors associated with teamwork, which is the essential defence against individual human failings and organizational dysfunction. Surprisingly, the most neglected factors in formulating monitoring plans are often technological and informational considerations. Informational considerations include not only the types of patient information to be monitored, but also the information needed to track site performance and manage sites effectively.
Another major consideration that is often overlooked is the ability of all technology used in a study to collect such information and deliver it when needed. All three categories of factors are vital in formulating a monitoring plan, but the informational factors are the largest determinant of a study’s ability to ensure compliance in the field. At a minimum, sponsors should consider the following factors in formulating a monitoring plan, broken down into the categories identified above:
Basic factors
- Distance to sites. Distance increases risk. Travel costs are higher and budgetary concerns may dictate less frequent monitoring visits;
- Intervals between on-site visits. Longer intervals may increase risk;
- Known safety issues with test drugs (such as toxicity of chemotherapeutic agents);
- The vulnerability of subject populations. Paediatric patients and patients with issues of competency (Alzheimer’s, dementia) increase risk;
- Impairments of subject populations. Impairments in mobility, affective disorders and other considerations may affect patient compliance;
- The number and complexity of inclusion/exclusion criteria. There is greater risk of screening mistakes and greater temptation for sites to interpret criteria liberally to meet enrollment goals;
- The number, novelty and complexity of study procedures. Such considerations contributed to the FDA warning letter cited at the outset: requirements for special handling of the test drug, a separate titration schedule for each patient and administration of the test drug by a person with very specific qualifications; and
- Life-threatening health condition. If the health condition itself jeopardises patients before administration of the test drug, risk for the study as a whole increases.
Teamwork factors
- Reliance on a single person to monitor each site. Lapses in individual human performance are inevitable;
- Information silos. The fewer eyes that review data, and the more isolated the data from other information, the greater the risk. Both automated cross-checks and a second human review break down information silos and decrease risk; and
- Linguistic and cultural barriers. Miscommunication between sites, monitors and study managers is always an issue, and linguistic and cultural differences exacerbate the problem.
Technological and informational factors
- Outdated data-capture technologies, especially paper-based processes. Delayed receipt of data increases risk;
- Lax data-entry performance. Regardless of how rapidly technology could capture and transmit information, what matters is how quickly sites actually make the data available to the study team. Delayed data entry increases risk;
- Failure to collect and review data directly associated with specific risks. If there is a risk of overdosing patients because of complex individualised titration procedures that take body weight into account, failure to capture dosing information and subject weight at the time of site visits, or to transmit the information rapidly, increases risk;
- The absence of range checks. Failure to perform appropriate range checks on dosing information, such as checking dose:weight ratio, increases risk;
- The absence of cross-checks. Failure to compare the amount of study drug reported as administered with the amount removed from inventory increases the risk of overdosing; and
- Delays in data validation. Information must not only be as timely as possible, it must also be correct as soon as possible.
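To make the range-check and cross-check ideas above concrete, here is a minimal sketch. The 2 mg/kg dosing ceiling and the drug-accountability tolerance are hypothetical assumptions for illustration; any real limits would come from the protocol:

```python
# Illustrative automated edit checks: a range check on the dose:weight
# ratio and a cross-check of reported dosing against drug removed from
# site inventory. All limits below are assumed, not from any protocol.

MAX_MG_PER_KG = 2.0  # hypothetical protocol ceiling

def dose_weight_in_range(dose_mg: float, weight_kg: float) -> bool:
    """Range check: is the dose within the per-kilogram ceiling?"""
    return dose_mg / weight_kg <= MAX_MG_PER_KG

def accountability_ok(administered_mg: float,
                      removed_from_inventory_mg: float,
                      tolerance_mg: float = 0.5) -> bool:
    """Cross-check: reported administration should match the inventory draw."""
    return abs(administered_mg - removed_from_inventory_mg) <= tolerance_mg
```

Run on each data submission rather than at the next site visit, checks like these surface a potential overdose or accountability gap in hours instead of weeks.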
Affirmative steps to ensure compliance
Having identified many of the greatest risk factors for non-compliance, we can now think in terms of affirmative steps to ensure compliance. We can monitor and manage sites based on a plan that begins by taking specific risks into account, then identifies the information flow required to track and manage those risks and finally considers how the study will ensure the flow of such information to monitors and managers. Consideration of the risk factors that should shape monitoring plans leaves no doubt about the four keys to ensuring compliance in clinical studies. As stated above, the keys to ensuring compliance are:
- technology that provides a continuous information flow and allows effective remote monitoring;
- a team-based approach that blends continuous remote monitoring with periodic on-site monitoring;
- adaptive monitoring strategies that adjust dynamically based on current information about site performance and study-wide issues; and
- a monitoring plan that seriously contemplates all the risks of the specific study, including:
- basic risks;
- points of dependency on individual performance; and
- technological and informational factors that affect risk.
It all adds up to an adaptive, team-based approach that blends continuous remote monitoring with periodic on-site monitoring, with the study team executing a plan tailored to each study’s unique requirements and risks. Rather than imposing high costs without ensuring compliance, this approach can ensure high compliance at reasonable cost.
1. 21 CFR 312.50, General responsibilities of sponsors, www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfcfr/CFRSearch.cfm?fr=312.50
2. FDA, Guideline for the monitoring of clinical investigations, January 1988, www.fda.gov/downloads/ICECI/EnforcementActions/BioresearchMonitoring/UCM133752.pdf
3. Malakoff D, Spiraling costs threaten gridlock, Science, 2008;322:210-3
4. Eisenstein et al, Reducing the costs of phase III cardiovascular clinical trials, Am Heart J, 2005;149(3):482-8
5. Eisenstein et al, Sensible approaches for reducing clinical trial costs, Clin Trials, 2008;5:75-84