I’ve repeatedly emphasized the importance of timely, actionable information in increasing the efficiency of clinical trials. When I do talks, I often discuss this in some detail but readers sometimes ask exactly what sorts of information I’m talking about. So let’s look at some key performance metrics for site performance and then move on to enrollment and zero in on how a report of one important metric allowed a sponsor to make a decision that kept a huge phase III trial on track.
The first general issue is improving site performance. Sites are the lifeblood of our trials, yet we often give them little or no help in improving their performance. One of the most important principles is that you can’t afford to wait until the next site visit to find out a site is struggling. Between visits, you need information that is continuously updated, always available, and readily interpreted, and you also can’t afford to rely on a single individual (often an inexperienced manager to boot) as your sole source of information. A common mistake is to assume that the data coming from sites tells you what you need to know, but data alone misses the most important management information. Some examples of performance metrics that help detect site problems:
- Query rate, by interviewer and site;
- Mean time from patient visit to data submission;
- Mean time to query response;
- Mean number of unresolved queries;
- Fields, forms, and range checks generating the most queries;
- Number of protocol violations;
- Number of adverse events and serious adverse events.
There’s nothing exotic or profound about these metrics. They are important simply because not having them might kill your study (and you in the process!). Sites with a high query rate or many unresolved queries demand immediate attention, ranging from more frequent communication and coaching, to an earlier site visit to retrain and motivate, to outright site replacement if things don’t improve fast.
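To make the point concrete, metrics like these fall out of a few lines of code once query and submission records are available. The sketch below is a minimal illustration in Python; the record layout, field names, and per-100-forms rate convention are assumptions for the example, not the schema of any particular EDC system.

```python
from datetime import date
from statistics import mean

# Hypothetical flat records; field names are illustrative only.
queries = [
    {"site": "101", "opened": date(2024, 1, 5), "resolved": date(2024, 1, 9)},
    {"site": "101", "opened": date(2024, 1, 12), "resolved": None},
    {"site": "102", "opened": date(2024, 1, 6), "resolved": date(2024, 1, 7)},
]
forms_submitted = {"101": 40, "102": 55}  # CRF pages received per site

def site_metrics(site):
    qs = [q for q in queries if q["site"] == site]
    resolved = [q for q in qs if q["resolved"] is not None]
    return {
        # queries raised per 100 forms submitted
        "query_rate": 100 * len(qs) / forms_submitted[site],
        # mean days from query open to response (resolved queries only)
        "mean_days_to_response": mean(
            (q["resolved"] - q["opened"]).days for q in resolved
        ) if resolved else None,
        "unresolved": len(qs) - len(resolved),
    }

for s in sorted(forms_submitted):
    print(s, site_metrics(s))
```

The hard part in practice is not the arithmetic but making sure such numbers are refreshed continuously and surfaced without anyone having to ask for them.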
Now let’s look at how a report on a single important enrollment metric, screen fail reasons (below), led to a decision that kept a global phase III study on track. A report just like the one below enabled the study team to see at a glance, just 8 weeks after first patient in (FPI), that many subjects were screen failing because of the allowable cycle window.
Like the column on diastolic and systolic pressures in the figure shown, the column for screen fails due to cycle window jumped off the page of the report automatically generated in the contraceptive study. Nobody had to do anything to generate the report of screen fail reasons – it was always available and continuously updated in the trial management system. Nobody had to collect data or run an analysis. The facts were always staring the study team in the face.
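At its core, a screen-fail-reasons report is just a tally with an alert threshold. Here is a minimal sketch; the reason codes and the 40% review threshold are made up for illustration and are not from the actual study.

```python
from collections import Counter

# Hypothetical screen-failure log; one entry per failed screen.
screen_fails = [
    "cycle_window", "cycle_window", "bp_out_of_range", "cycle_window",
    "lab_exclusion", "cycle_window", "cycle_window", "prior_medication",
]

counts = Counter(screen_fails)
total = sum(counts.values())
for reason, n in counts.most_common():
    share = 100 * n / total
    flag = "  <-- review" if share >= 40 else ""  # arbitrary alert threshold
    print(f"{reason:20s} {n:3d} ({share:4.1f}%){flag}")
```

A dominant reason code jumps off the page of output exactly the way the cycle-window column did in the study described above.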
Following quick consultations with the lead PI and medical monitor, the sponsor sought and received a protocol amendment. That change rapidly brought enrollment up to targets without compromising the study’s ability to assess the effectiveness of the contraceptive.
This is one example of the power of decision-making based on timely, actionable information. Over the course of a single trial, actionable information may prompt hundreds of decisions that reduce timelines and cost or improve data quality. Most decisions bring small improvements individually, but the cumulative effect on the study can be huge. Multiply that across all the studies in a program and your company is looking at the equivalent of a transformative increase in the R&D budget.