Report offers tips to minimize risk and ensure trial success

Medical Device Daily
By Amanda Pederson, Staff Writer

February 2010 - According to a new white paper from Health Decisions (Durham, North Carolina), a global clinical research organization (CRO), “if a drug works, there should be no reason for a clinical trial to fail,” and the same could easily be said of a device trial. This may seem like common sense, according to the paper, “but many potentially valid drugs are denied FDA approval because of flawed clinical data that undermines the validity of overall efficacy and safety findings.”

The main goal of a risk reduction strategy, according to the Health Decisions paper, is to “eliminate the possibility of a program failing for any reason other than proven lack of efficacy or safety.”

The report, “What you don’t know can hurt you,” identifies five essential elements for minimizing development risk and ensuring a trial’s greatest chance of success, according to Health Decisions. Those elements are: track and analyze data management metrics to ensure accuracy; provide managers with continuous information to make decisions; standardize training and processes at all sites; use trusted, certified partners; and use adaptive designs when appropriate.

“By continually measuring, monitoring and evaluating metrics, study teams are able to generate information with which they can make timely and relevant decisions,” Mike Ford, head of data management at Health Decisions, explained to Medical Device Daily via email. “The ultimate goals are to identify an issue as quickly as possible, take corrective action in a timely manner, and put in place the appropriate team to more effectively allocate the study’s resources to areas where it will be most beneficial, thereby saving time and resources, resulting in more cost-effective research, and ensuring that data is as accurate, well-managed, protocol-compliant and ordered as possible so that there’s no risk of regulatory rejection unless the device (or drug) candidate is simply unsafe or inefficacious.”

The paper points out that one undetected issue can be the difference between a new product getting to market and an FDA rejection letter. “Risk comes with the territory in any type of research, but by carefully planning your programs and incorporating mechanisms to minimize risk, you can be confident that your program has the highest chance of success,” according to the report.

The first of the five essential elements for minimizing development risk and ensuring trial success, according to the paper, is tracking and analyzing data management metrics to ensure accuracy.

“This can involve tracking any number of metrics from CRF submission and query responses to frequency and type of adverse event,” Dan Cormican, a project manager with Health Decisions, told MDD via email.

The white paper uses the example of how a sudden spike in query rates might indicate that test procedures are being administered inaccurately or study case report forms (CRFs) are being improperly handled. An effective data management system will flag this trend break and alert managers to investigate the root of the problem, according to the report. Perhaps the issue is personnel turnover or a staffing problem at the site, leaving an untrained staff member to execute study functions, the paper suggests.

“Without the proper attention to metrics and data trends, this type of deal-breaker issue could easily go undetected until it’s too late and/or expensive to make amends,” Cormican said. “Another scenario is the likelihood of unexpected CRF submission and query trends revealing common user error on behalf of an individual investigator, site, or group of sites. This would flag the need for immediate retraining to prevent the introduction of errors or inconsistency into final patient data. One of the first rules of thumb is the higher the query rate, the higher the risk of flawed data.”
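To make the idea concrete, here is a minimal sketch (in Python) of the kind of trend-break check the paper describes. It is not Health Decisions' actual system; the window size, threshold and weekly query rates are assumptions chosen purely for illustration.

    # Hypothetical sketch: flag weeks where a site's query rate jumps well
    # above its recent trend, so managers can investigate staffing or
    # training problems before flawed data accumulates.
    from statistics import mean, stdev

    def flag_query_rate_spikes(weekly_query_rates, window=4, threshold=3.0):
        """Return week indices whose query rate breaks from the recent trend."""
        flagged = []
        for i in range(window, len(weekly_query_rates)):
            recent = weekly_query_rates[i - window:i]
            baseline = mean(recent)
            spread = stdev(recent)
            if weekly_query_rates[i] > baseline + threshold * spread:
                flagged.append(i)
        return flagged

    # Example: queries per 100 CRF pages submitted each week at one site.
    site_rates = [2.1, 1.8, 2.4, 2.0, 2.2, 7.5, 8.1]
    print(flag_query_rate_spikes(site_rates))  # -> [5], 0-based index of the 7.5 spike

A jump like the one flagged above is exactly the sort of signal that, in the paper's example, would prompt a closer look at site staffing or retraining.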

Cormican also offered a device-specific example of using metrics and patient data to reduce a trial's risk and improve its chances of success: a device trial looking for a specific improvement on a visual analog pain scale could monitor progress during the trial and make timely decisions about study design if the data warrant it, he said.
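As a rough illustration of that kind of interim check, the sketch below computes the observed mean improvement on a visual analog scale and compares it to a target; the patient scores and the 15 mm target are invented for the example and do not come from the white paper.

    # Hypothetical interim check on a visual analog scale (VAS) endpoint.
    def mean_vas_improvement(baseline_scores, followup_scores):
        """Average per-patient reduction in VAS pain score (positive = improvement)."""
        changes = [b - f for b, f in zip(baseline_scores, followup_scores)]
        return sum(changes) / len(changes)

    # Illustrative interim data on a 0-100 mm VAS; values are invented.
    baseline = [72, 65, 80, 58, 69]
    followup = [51, 50, 62, 49, 55]
    observed = mean_vas_improvement(baseline, followup)
    target = 15.0  # assumed protocol target improvement, in mm
    print(f"Interim mean improvement: {observed:.1f} mm (target {target} mm)")

Whether such a readout should trigger a design change is a protocol and statistics question; the point is simply that the metric is available while the trial is still running.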

The paper also notes as an essential element for minimizing risk and ensuring trial success that managers need relevant metrics, patient data and analysis as quickly as possible to make important decisions about the course of their program. “When information is unavailable, or comes too late, it increases the risk that something can go wrong or a problem could go unnoticed,” the paper states. “Mechanisms must be put in place to deliver the right information to the right eyes at the right time, from the field to the boardroom.”

One way to do this, according to the paper, is with systems like the digital pen that can transmit information between sites and data analysts in near real-time, conveying both patient data and detailed metrics on the operational performance of each individual site. Health Decisions uses a digital pen called the SmartPen. With this device, the CRO says, doctors and nurses at sites fill out the paper form once, and the information is automatically transmitted to the HD360 system – “eliminating a time-consuming step in the process as well as the error-prone keyboard entry, making better data available as soon as possible.” The pen transmits an exact digital copy of the CRF to the Health Decisions data management team, the organization noted.

Another element for minimizing risk and ensuring trial success, according to the Health Decisions paper, is to implement consistent training and processes across the entire study. The report recommends using an online management system designed in-house or provided by a CRO to ensure that all sites have a proper understanding of the tools with which they'll be working. This system should also facilitate the flow of information between sites, monitors and managers, so that site performance can be tracked and information shared remotely, the paper says.

“Say your clinical trial requires specific placement of a stent,” Cormican said. “It would be of utmost importance for all investigators at all sites to be meticulously trained to complete the procedure to identical standards. If there is any exception or deviation from this standardized procedure, the risk of adverse device experiences would increase dramatically, along with the likelihood of inconsistent data results that would jeopardize regulatory approval.”

An example that is less life-threatening but just as important to maintaining scientific viability and preventing undue regulatory rejection, Cormican said, could be a device study collecting patient data via site-administered subject questionnaires. “In this case, it would be important to limit inter-trainer variability by making sure that all investigators and sites receive the same training. This would minimize inconsistencies in the way questions are asked, recorded and submitted to data analysts, and would also reduce the frequency of queries, thus making it less likely that the final data would contain errors.”
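One simple way to quantify that kind of between-site inconsistency is sketched below; the site names, scores and tolerance are invented for the example rather than drawn from the paper.

    # Hypothetical check for inter-site (and, by proxy, inter-trainer) drift
    # in how a questionnaire is administered and scored.
    from statistics import mean

    def sites_needing_review(responses_by_site, tolerance=1.5):
        """Flag sites whose average score drifts far from the overall average."""
        per_site = {site: mean(scores) for site, scores in responses_by_site.items()}
        overall = mean(per_site.values())
        return [site for site, m in per_site.items() if abs(m - overall) > tolerance]

    # Illustrative data only: questionnaire scores on a 0-10 scale.
    data = {"Site A": [4, 5, 4, 6], "Site B": [5, 4, 5, 5], "Site C": [8, 9, 8, 9]}
    print(sites_needing_review(data))  # -> ['Site C'], a candidate for retraining

A flagged site does not prove poor training, but it tells monitors where to look first, which is the same logic the paper applies to query rates.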
