Adaptive Monitoring

Today we filed a patent on the execution of adaptive monitoring, and I was again struck by the question of why our industry doggedly sticks with a 1950s model of how we assure field data quality.

The concept of sending people out to the field to check the data made sense a couple of decades ago. After all, data are the basis for a drug’s progression, and what could be more important than assuring that the data we collect are accurate?

As an industry, we spend about one-third of study budgets checking the database against source (defined as the first place data are recorded, often a patient chart), which amounts to billions every year. It might seem odd that we spend this kind of money on laborious, slow, and error-prone methods to ensure that the data we collect are accurate. And it wouldn’t be so bad if we got a return on that investment, but (1) being a manual process, it is slow and actually introduces and misses some errors even as it corrects others, (2) it occurs at intervals of weeks or longer, meaning that the data cannot be used for weeks to months at best and years at worst, and (3) it doesn’t necessarily assure data quality, as witnessed by several large studies that had to be redone from scratch because of poor data controls.

In a world where credit card charges, which absolutely must be accurate, show up before you even make it home to check them, how could we do better?

The answer lies in continuous assessment of what happens in the field and a means of allocating resources according to the nature and magnitude of each issue. If this looks familiar, it’s because it has long been used in manufacturing and other areas, dating back to W. Edwards Deming and the post-World War II efforts that turned Japan from a manufacturer of cheap goods into an industrial powerhouse. It’s called statistical process control. It’s also what the FDA recently encouraged the industry to use.
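To make the idea concrete, here is a minimal sketch of an SPC-style check on a single site-level metric, written in Python; the metric, values, and three-sigma limits are illustrative assumptions, not drawn from any particular study or from our method.

```python
# Minimal SPC sketch: flag a site metric that drifts outside control limits.
# The metric name, history, and thresholds below are hypothetical.
import statistics

def control_limits(values, sigmas=3.0):
    """Return (lower, upper) control limits from historical values."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return mean - sigmas * sd, mean + sigmas * sd

# Weekly query rates (queries per 100 data points) for one site.
history = [2.1, 1.8, 2.4, 2.0, 2.2, 1.9, 2.3]
lower, upper = control_limits(history)

latest = 4.7  # this week's observation
if not (lower <= latest <= upper):
    print(f"Out of control: {latest:.1f} outside ({lower:.1f}, {upper:.1f})")
    # In practice, this would trigger follow-up such as a targeted field visit.
```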

Our patent is fundamentally SPC as applied to monitoring. You measure direct data quality (through several different measures), indirect measures of quality (mostly passively collected elements such as experience, timeliness, and responsiveness), and composite measures. Stir them all together, use some statistics (which can be simple or quite elegant, including recursive machine learning) to continuously sift through what’s meaningful, and you end up with a recipe that lets us decrease field monitoring by 75% compared with traditional approaches, with better quality. We know this works; we are currently using it on a large global study, where it also sets off a ripple effect of other changes, the most marked of which is centralized, immediate management that has reduced rework by about 50%.
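As an illustration only, a toy composite score might blend direct and indirect measures like this; the measure names, weights, and visit threshold here are hypothetical and far simpler than the statistics described above, so treat it as a sketch of the idea rather than the patented method.

```python
# Illustrative only: a composite site-risk score blending direct and
# indirect quality measures, used to decide where to send monitors.
# All weights, measure names, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SiteMetrics:
    error_rate: float         # direct: source-vs-database discrepancy rate (0-1)
    query_rate: float         # direct: query rate, scaled to 0-1
    entry_lag_days: float     # indirect: median days from visit to data entry
    response_lag_days: float  # indirect: median days to resolve a query
    staff_experience: float   # indirect: 0 (new) to 1 (highly experienced)

WEIGHTS = {
    "error_rate": 0.35,
    "query_rate": 0.20,
    "entry_lag": 0.15,
    "response_lag": 0.15,
    "inexperience": 0.15,
}

def risk_score(m: SiteMetrics) -> float:
    """Combine normalized measures into a 0-1 composite risk score."""
    entry_lag = min(m.entry_lag_days / 30.0, 1.0)       # cap lags at 30 days
    response_lag = min(m.response_lag_days / 30.0, 1.0)
    return (
        WEIGHTS["error_rate"] * m.error_rate
        + WEIGHTS["query_rate"] * m.query_rate
        + WEIGHTS["entry_lag"] * entry_lag
        + WEIGHTS["response_lag"] * response_lag
        + WEIGHTS["inexperience"] * (1.0 - m.staff_experience)
    )

sites = {
    "Site 101": SiteMetrics(0.02, 0.10, 4, 3, 0.9),
    "Site 205": SiteMetrics(0.08, 0.40, 21, 14, 0.3),
}
VISIT_THRESHOLD = 0.30
for name, metrics in sites.items():
    score = risk_score(metrics)
    action = "schedule field visit" if score > VISIT_THRESHOLD else "monitor centrally"
    print(f"{name}: risk {score:.2f} -> {action}")
```

The point of the sketch is the allocation logic: high-scoring sites get field visits, while the rest are handled centrally and continuously, which is where the reduction in field monitoring comes from.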

Most important, this is the foundation for the continuous, clean operations that enable a fundamental leap in efficiency. The immediacy of operational information allows us to enroll studies at about six times industry-average rates, and it enables the immediate strategic decision making that is the heart of Bayesian and other approaches. Most fundamentally, it transforms drug development from a staccato, black-box, risky process into a smooth, continuous one in which both where we stand and where we are likely to end up are integrated into the development process.

I’m interested in your thoughts—have you considered adaptive monitoring? What impediments do you see? Does adaptive monitoring have a place in your company in the future?
