A Watchful Eye: Tracking the Trajectory of a Clinical Trial

European Biopharmaceutical Review

A serious clinical study and a casual game of catch differ in countless ways, but they do have one thing in common: a trajectory. In the clinical environment, periodic and continuous observation of trials is a requisite for success.

Spring 2011 - There is little we can learn about a ball’s trajectory from two still photographs, the first showing the ball leaving a child’s hand and the second showing a shattered window and damaged equipment in a chemist’s lab. The two photographs give us a general impression that a chance occurrence ruined a chemist’s experiment, but they say too little and come too late to allow intervention to prevent the damage. The situation would be very different if, instead of a conventional camera that takes still photographs, we were using a wide-angle digital video camera to record the game of catch in real time. We could see and follow the ball’s flight, project its trajectory, and shout a warning. Perhaps the second participant in the game of catch would leap high and catch the ball before it broke the window. Perhaps the chemist would hear the shout and shield the experimental apparatus.

The experiments that we call clinical studies typically unfold at multiple widely separated sites, not on a table in a lab. However, the industry’s difficulty in producing new drugs shows that current processes leave our studies as vulnerable to random events as the chemist’s apparatus. Monitoring is the single most important safeguard for clinical studies, and yet monitoring the traditional way is like trying to gauge the trajectory of a moving object from still photographs taken days or weeks apart.

Adaptive monitoring takes advantage of technology that provides a near real-time stream of data and performance metrics from investigational sites. Instead of snapshots of conditions in the field, monitors and study managers have the equivalent of full-motion video. They have the information they need to understand not just a site’s or study’s status at a point in time, but its trajectory. Monitors and managers thus have the opportunity to intervene if things begin to wander off course – as they always seem to, especially with complex and multinational trials. Adaptive monitoring allows us to do a much better job of achieving the three principal goals of monitoring: ensuring the accuracy of data, the proper execution of study procedures, and the protection of patient interests.

Over time, the industry has developed rules of thumb for allocating resources to monitor studies. We typically decide on a fixed interval for site visits – often six or eight weeks. Site visits provide useful snapshots of site status, but at high cost. Monitors and monitoring, including travel for site visits, consume approximately one third of study costs.

We know what our monitoring budgets are paying for with the current approach. Monitors spend approximately 65 per cent of their time on source data verification (SDV), 12 per cent on data checks, 11 per cent on regulatory matters, nine per cent on coordinator discussions, and five per cent on drug accountability. SDV and data checks together consume 77 per cent of monitors’ time. In other words, monitors spend most of their time meticulously checking the accuracy of what sites have already done, to the exclusion of helping sites work more efficiently in the future. They have little opportunity to consider the two elements most central to study success: enrolment and data quality. Monitors function primarily as box checkers rather than as managers who focus on the big picture.


Adaptive monitoring means tailoring the use of resources to a study’s evolving needs. Adaptive monitoring is enabled by a stream of timely information from the field and is part of agile clinical development. The central principle of adaptive monitoring is to allocate monitoring resources dynamically during studies based on current conditions at each site and for the study as a whole. This is a striking departure from the usual fixed allocation of resources, such as a site visit every six weeks regardless of site performance.

Adaptive monitoring is inherently dynamic and relies on knowledge of current conditions and trends. The hallmarks of adaptive monitoring include:

  • Availability of a stream of near real-time information on site activities
  • Continuous remote monitoring from a central office, enabled by the stream of timely information
  • Dynamic allocation of site visits and other resources
  • A team approach that blends continuous remote with needs-based on-site monitoring
  • Increased focus on prevention rather than correcting problems after the fact
  • Reduction or, where technology allows, elimination of source-data verification


Each of these hallmarks makes a distinct contribution to adaptive monitoring. The following paragraphs explain how. Table 1 contrasts important characteristics of traditional and adaptive monitoring.

Table 1 - Comparison of key characteristics of traditional and adaptive monitoring

A Stream of Timely Information
The availability of a stream of timely information on site activities is essential for adaptive monitoring. This provides the basis for adapting monitoring activities, including the frequency of visits, the focus of attention during each visit, and the allocation of monitoring resources at study conclusion to ensure timely closeout. Ideally, site data becomes available to study personnel at a central site immediately, even before a patient leaves a site after an office visit. The flow of timely information must include not only patient data, but also performance metrics that track the status of all major operations at each site. Such a flow of data enables monitors to play a significant managerial role for their sites. When critical source data is electronic, it can become available to monitors in a central office almost immediately.

Continuous Remote Monitoring
In traditional monitoring, each monitor assumes responsibility for specific sites and visits them at intervals. Continuous remote monitoring allows additional sets of eyes to track activities at each site. More experienced members of a central team may identify problems that a less experienced monitor overlooks. The individual who makes site visits understands that other members of the study team are tracking performance, an incentive to provide best efforts. Thus, remote monitoring improves the supervision of sites and guards against vulnerability to poor performance by any individual.

Dynamic Allocation of Resources
Study managers can allocate site visits based on need as revealed in current data and performance metrics. Emergence of a serious problem at a site may require an immediate phone call followed rapidly by a site visit. The monitor performing the site visit arrives with an understanding of likely issues and known problem areas. For example, if results from a study procedure such as a subjective assessment are inconsistent over time or differ markedly from results at other sites, the monitor can look into how the site is performing the assessment, including whether the personnel assigned are qualified, received appropriate training in administering the assessment, and so on. Sites where metrics reveal issues receive earlier and more frequent site visits and more intense management scrutiny, enabling early correction of site issues. Conversely, sites that have outstanding performance metrics and promptly submit data of high quality may need fewer site visits. In some cases, this can result in fewer total site visits and lower travel expenses for an entire study. Study managers can allocate additional monitors to close out sites with greater numbers of unmonitored data fields remaining, preventing a lagging site from delaying database lock for an entire study.
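The allocation logic described above can be sketched in a few lines of Python. The metric names, weights, and scheduling rule below are illustrative assumptions, not a prescribed industry standard:

```python
# Illustrative sketch of dynamic site-visit allocation: rank sites by
# current performance metrics and visit the riskiest first.
# All metric names and weights are hypothetical assumptions.

sites = [
    {"id": "S01", "open_queries": 4,  "days_since_visit": 20, "enrolment_vs_plan": 1.1},
    {"id": "S02", "open_queries": 31, "days_since_visit": 55, "enrolment_vs_plan": 0.6},
    {"id": "S03", "open_queries": 9,  "days_since_visit": 70, "enrolment_vs_plan": 0.9},
]

def risk(site):
    """Higher score = visit sooner. Weights are illustrative only."""
    return (site["open_queries"] * 1.0              # unresolved queries
            + site["days_since_visit"] * 0.2        # time since last visit
            + max(0.0, 1.0 - site["enrolment_vs_plan"]) * 25)  # enrolment shortfall

visit_order = sorted(sites, key=risk, reverse=True)
print([s["id"] for s in visit_order])
```

With these hypothetical numbers, the lagging site S02 is scheduled first and the well-performing site S01 last, which is the behaviour the text describes: more scrutiny where metrics reveal issues, fewer visits where performance is strong.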

Remote and On-Site Monitoring Attention
The quality of traditional monitoring is highly dependent on the performance of individual monitors. A lapse in one monitor’s performance may cause serious problems, especially if it is not discovered until discrepancies become apparent during attempts to lock the study database. Using staff in a central office to perform continuous remote monitoring enables a team approach that brings greater expertise to bear and reduces vulnerability to poor performance by any individual field monitor. A second set of eyes increases the likelihood of identifying errors. Each site benefits from a team of monitors and a blend of continuous remote monitoring with on-site monitoring based on need. The benefits from such teamwork, both at the site level and study-wide, can be enormous.

Focus on Prevention
Traditional site monitoring consists primarily of meticulous checking of information and correction of individual problems after the fact. A stream of timely information and performance metrics allows a higher-level view of site performance. Metrics may reveal a pattern of error at one site in the use of a specific case report form. The monitor can investigate, identify the source of the problem and intervene to prevent recurrences. Greater efficiency in resolving individual queries provides notable benefits, but these pale beside the benefits from reducing the number of problems to fix and queries to resolve. Timely data and performance metrics enable a preventative approach that reduces the time, effort and expense required to ensure high data quality and satisfactory site performance.


Reducing or Eliminating Source-Data Verification
There are three options for reducing or eliminating source-data verification. The first is to eliminate it by collecting data, where possible, on electronic source documents. Digital pens and tablets enable direct collection of data on electronic CRFs. This offers several advantages: because site personnel never retype data originally entered on paper forms, transcription errors are impossible, and source data, including images of CRFs, becomes available immediately to remote monitors. This accelerates the review of data to identify issues and to generate and resolve queries. The greatest benefit of electronic source data is that it obviates traditional source-data verification for all data collected on electronic CRFs. It is important to understand that this does not reduce scrutiny of study data. It simply eliminates the comparison of some electronic study data with original paper records: there are no original paper records for such a comparison, and there is no possibility of discrepancies between a paper document and data retyped from it.

Figure 1: Diagram of general process for dynamic allocation of source-data verification efforts

The second option is to prioritise data and allocate verification activities accordingly. For example, a study could verify 100 per cent of data classified as critical but only a sample of non-critical data. The third option is to focus SDV effort through algorithms based on factors that affect data quality. Algorithms can increase the intensity of SDV where problems exist – intensive care for endangered data – and reduce it elsewhere. The objective of the algorithm is to predict how much scrutiny particular data requires; similar algorithms can guide managerial focus. Defining the algorithm is key. Where SDV is concerned, the assessment of data quality depends on several factors, including query rates, and an algorithm may assign a weight to each factor. Figure 1 illustrates a process for developing such an algorithm.
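A weighted-factor algorithm of this kind can be sketched very simply. In the Python sketch below, the factor names, weights, and sampling thresholds are all hypothetical assumptions chosen for illustration; a real study would derive them from its own risk assessment:

```python
# Illustrative sketch of a weighted-factor algorithm for allocating
# source-data verification (SDV) intensity. Factor names, weights,
# and thresholds are hypothetical assumptions, not a standard.

# Hypothetical weights for factors that affect data quality;
# each site metric is assumed to be normalised to the range 0.0-1.0.
WEIGHTS = {
    "query_rate": 0.4,         # queries raised per data field
    "late_entry_rate": 0.25,   # share of data entered late
    "protocol_deviations": 0.2,
    "staff_turnover": 0.15,
}

def sdv_intensity(metrics: dict) -> float:
    """Combine normalised site metrics into a single risk score (0.0-1.0)."""
    return sum(WEIGHTS[k] * metrics.get(k, 0.0) for k in WEIGHTS)

def sdv_level(score: float) -> str:
    """Map the risk score to an SDV sampling level (thresholds illustrative)."""
    if score >= 0.6:
        return "100% SDV"       # intensive care for endangered data
    if score >= 0.3:
        return "50% sample SDV"
    return "10% sample SDV"

site = {"query_rate": 0.8, "late_entry_rate": 0.5,
        "protocol_deviations": 0.2, "staff_turnover": 0.1}
score = sdv_intensity(site)
print(round(score, 3), sdv_level(score))
```

The design point is the one the text makes: scrutiny scales with evidence of risk, so sites generating many queries receive intensive verification while clean sites are sampled lightly.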

Regulatory Considerations
Neither the EMA nor the FDA dictates how to perform monitoring. Sponsors are charged with ensuring data quality. So long as we meet this requirement, and we are confident that any regulatory checks will confirm that we have done so, we have a degree of freedom in how we monitor. This freedom must be understood as subordinate to the responsibility to ensure data quality. When considering implementation of any new monitoring techniques, including those described in this article, we suggest first discussing plans with regulators.


The development of a novel biologic by a young biotech company illustrates the way adaptive methods are providing a basis for more prudent use of development budgets and more productive relationships between sponsors and CROs. The novel biologic is a treatment for an indication with relatively short safety and efficacy outcomes. There is a well-defined path to regulatory approval that facilitates programme planning when the biologic is ready for Phase 2 testing.

With investment funds scarce, the biotech’s strategy is to identify a venture capital partner that will take advantage of adaptive techniques to reduce the time required for a combined proof-of-concept and dosing study. The achievement of predefined success measures will trigger an option granting the venture partner ownership of the product at an agreed price that escalates with more favourable Phase 2 results. The main adaptive components are (1) rolling two traditional components (PoC and dosing) into a single study, and (2) producing an agreed-upon signal differentiating the doses, together with a predicted profile for the molecule. Part of the agreement is that the biotech will conduct Phase 2 testing not to produce statistically significant results but to identify signals that justify continued development – signals that determine whether the test molecule meets the agreed target profile.

The biotech company and a CRO with specialised adaptive capabilities also design a plan for rapid transition into Phase 3 if Phase 2 produces favourable signals.

The plans for Phase 2 call for:

  1. Six initial dosing arms (because of potential safety issues and to reduce the risk of guessing wrong about the best doses to test)
  2. The use of predictive modelling – a technique to estimate the probability of the success of each dosing arm and the overall probability of the success for the test molecule based on results for all arms
  3. Rapid pruning to the four most promising treatment arms
  4. More gradual pruning to eliminate two additional treatment arms and identify the optimal dose for Phase 3 testing
  5. Monitoring for the key signal for the transition to Phase 3: a predictive probability of more than 90 per cent that the test molecule meets the target profile
  6. Continuous updating of predictive probabilities of success and other key results with the investment partner through automatically updated desktop widgets
  7. A goal to start Phase 3 testing within six weeks of completing Phase 2
  8. If the study should be completed early, continued monitoring of Phase 2 patients for safety issues for a specified period appropriate for the indication and class of molecule
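The predictive-modelling step in item 2 can be illustrated with a simple Bayesian sketch. Assuming a binary response endpoint, a flat Beta(1, 1) prior, and a target response rate, a Monte Carlo simulation estimates the probability that an arm finishes the study above the target. The endpoint, interim numbers, and threshold below are hypothetical, not taken from the case study:

```python
# Illustrative sketch of a predictive probability of success for one
# dosing arm, assuming a binary response endpoint and a Beta(1, 1) prior.
# All numbers (enrolment target, interim data, target rate) are hypothetical.
import random

def predictive_prob_success(responders, enrolled, n_max, target_rate,
                            n_sims=20000, seed=42):
    """Monte Carlo estimate of the probability that, once n_max patients
    are enrolled, the observed response rate exceeds target_rate."""
    rng = random.Random(seed)
    a, b = 1 + responders, 1 + (enrolled - responders)  # Beta posterior
    remaining = n_max - enrolled
    successes = 0
    for _ in range(n_sims):
        p = rng.betavariate(a, b)                       # plausible true response rate
        future = sum(rng.random() < p for _ in range(remaining))
        if (responders + future) / n_max > target_rate:
            successes += 1
    return successes / n_sims

# Interim look: 18 responders in 25 patients; the arm will enrol 40 in total.
pps = predictive_prob_success(responders=18, enrolled=25, n_max=40, target_rate=0.5)
print(f"Predictive probability of success: {pps:.2f}")
```

Updating this quantity as each patient’s data arrives is what allows the continuous pruning and the ‘more than 90 per cent’ transition signal described above; a production implementation would of course use the study’s actual endpoint and a validated statistical model.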


As data accumulated during the execution of the Phase 2 study, pruning proceeded as hoped, rapidly eliminating the two treatment arms that were clearly less promising. Over a more extended period, accumulating data justified reducing the number of arms from four to three, then to two, and finally to one. Although testing more arms initially increased costs, pruning the treatment arms early meant that the total cost was only slightly greater.

Both safety and efficacy results were favourable. About halfway through the planned maximum timelines, the predictive probability of success of the test molecule surpassed 90 per cent, and confidence intervals were tight enough to comfortably predict eventual success.

As a result, the study took less than half the time of a conventional study that carries four treatment arms for the full duration with the goal of producing statistically significant results. Furthermore, work quickly started to finalise the design for Phase 3 testing and to select and prepare investigational sites, reducing the time required for the transition to Phase 3.

Because of good fortune in having clear early signals separating the safety and efficacy profiles of the different treatment arms, and a clear signal of likely success for the molecule, selecting the optimal dose for Phase 3 testing was easy. However, it took three months, rather than the desired six weeks, before the first patient was enrolled in Phase 3. Regulatory approval was the main limiting factor. The sponsor completed the end-of-Phase 2 regulatory submission two weeks after the last patient’s last visit and provided updates of safety information shortly before meeting with regulators.

Meetings between the sponsor and the investment partner started on completion of Phase 2. Conditions for exercising the option had been met. The sponsor handed the in-progress Phase 3 study to the partner, who signed a cheque and owned the molecule.


The use of adaptive techniques reduced Phase 2 timelines by six months and allowed the sponsor to use venture funding, which would normally carry the molecule only through proof-of-concept (PoC), to complete both PoC and dosing and to move into Phase 3. The most important element was an adaptive strategy that defined an early indication of success and allowed informed dosing decisions to be made much earlier than in typical programmes.