Current Drug Discovery
May 2003 - Can technology reduce development time? Introducing electronic systems can actually slow the process – as many companies have found out. However, ‘appropriate’ use of clinical technologies is key. When systems are well designed and utilized appropriately, the flow of information into knowledge used to make good decisions smooths and expedites research.
Although we operate at a time when knowledge is increasingly the primary resource for both individuals and the economy as a whole, we still manage clinical trials in a manner that takes remarkably little advantage of modern information processing capabilities. For the most part, clinical evaluations continue to be a sequential, linear series of steps. Data are first recorded by writing numbers on paper. These forms may languish several weeks before being collected and brought to a centralized location, where they often sit for weeks or months before being entered for validation, most often by keypunch-and-verify methods that evoke the 1960s. Only then do queries make their way back to sites for resolution. This approach effectively prevents us from managing studies, because by the time we learn what has occurred, usually several weeks to months after the fact, it is too late to be able to intervene. Many problems, mostly small but sometimes major, are apparent only in retrospect, often after a study is complete. Some are never discretely identified, leading to the mistaken impression that such ‘noise’ is inherent to the development process. The result is that it is not until the completion of a study that we begin to get a glimpse into not only the study’s results, but how well the study was run.
The pharmaceutical industry recognizes that improving efficiency is a critical issue. That clinical development is a costly and lengthy process, averaging $800 million and 10 to 12 years, is well known, but the central question is whether these figures represent complexity inherent in the drug development process or whether they reflect, at least in part, inefficiencies amenable to improvement by tools such as electronic data capture (EDC). The costs of EDC systems are not inconsequential, ranging from a hundred thousand dollars for a small phase I study to more than a million for a large phase III study. While it is tempting to measure this investment against an endpoint such as the cost of data handling or the time to complete a study, the real measure of value lies in our ability to utilize information generated during the course of a study to make better, more timely decisions – both in managing that study and subsequent ones, and in planning that allows better strategic decisions.
Yet common in the industry is the experience of introducing technology, often in the form of electronic data collection systems, and finding little if any improvement in the speed of performing clinical trials. Managers planning clinical evaluations now face three questions: should an electronic system be considered? If so, what capabilities are needed? And how should its performance be measured?
Advantages of technology
Although most marketing from companies that sell electronic systems stresses the ability to collect and clean data quickly, improving the way studies are run demands two broad goals: the tactical advantage of being able to run the study itself more efficiently, as a result of having up-to-the-minute information, and the strategic advantage of being able to complete the study more quickly. The common element to both is rapid decision making, and any technology is destined to fail at the goal of improving development time unless it can improve the ability to make more timely and accurate decisions. The question of the advantage over paper and pencil boils down to this capability, and the common industry experience of EDC systems failing to improve study timelines can be traced to the inability of most systems currently marketed to meet this basic goal.
Effective use of data
The most important issue with collected data is not so much the collection as the transformation of data into information and then into knowledge. Most currently marketed electronic tools (notably those that focus on data collection alone, such as web-based data entry) are limited to data collection. While this is an important step, these tools largely fail to consider that improving development rests on effective utilization of data. Most systems have been built around the need to collect data – basically an incremental change to the paper-and-pencil process, born of a conservative industry’s perceived demand for a change that doesn’t change much.
The industry’s accumulating experience of EDC systems failing to alter study timelines is understandable in this context. Although most current systems focus on the notion of faster database lock, this limited perspective overlooks a much broader – and harder – role for how technology can reduce development time. Well-applied technology allows data to be collected quickly and focuses on using this information for the tactical aspects of study conduct.
A common failure of earlier data collection systems, such as fax-back systems, is that they transmitted data quickly to a central point, but from that point on the usual bottlenecks ensued. For example, data might reach a sponsor the same day a patient was seen, but hand-entering the data and running validity checks often took a month or longer. To put it another way, collecting the data is relatively easy; putting it to good use is considerably more challenging, in part because the value of any tool depends on the manner in which it is used. The best of tools will always be suboptimal if the context that allows them to work to full advantage is lacking.
Strategic aspects of a study are as important as study management. An early indication of efficacy and safety can save time and money, but this ‘external’ validity depends on the study being well run – its internal validity. Without internal validity, external validity is not possible, and even with it, external validity is not guaranteed. Both are critical, and it is with both that the real value of technology lies. Strategic success rests on being able to interpret and act on timely information. This translates to the ability to continuously monitor key outcomes, to gain an increasing awareness of key decision points as the study progresses, and to minimize both decision time within a study and the time required to take the next step following study completion.
Making it work
The recent focus on data collection, including marketing programs, creates the expectation of being able to install a software product and quickly gain the advertised advantages – most frequently closing databases more quickly. This is equally true whether using the software in-house or having the data managed by an outside group (the application service provider model). In practice, however, this is rarely the case: most often, the purchaser realizes that there are many other complementary pieces of the process that need to be aligned, and that these changes ripple through the organization and end up being considerably more profound than anticipated. Thus, the real challenge lies in changing the other parts of the process to enable the new capability to function well. This aspect is surprisingly often overlooked entirely, and even when it is considered it proves to be considerably more difficult than anticipated. Web-based data entry vendors often end up victims of their own promotional claims because they encourage this overly simplistic view – the pharma equivalent of losing weight while you sleep.
There may also be inadequate consideration of the individuals who will be using these systems at the sites. Sites are often the limiting factor, because many electronic systems – web-based data entry in particular – require that site personnel first record data by hand on a worksheet, then enter those data into a computer. Clinical personnel are thus required to function as data entry clerks, a task for which they are neither trained nor, in many cases, eager to pursue. These staff may not even be comfortable with computers or the internet, and may also lack adequate technical support and infrastructure. The consequence is expensive, untrained, and unenthusiastic performance of a key function, along with delays, because people tend to put off what they do not like doing. Frustration levels are high, tolerance is low, and the result is often poor, untimely data: more than a few studies started with EDC systems have been scrapped in favor of restarting with paper-based data collection!
Appropriate use of technology also means recognizing that a simple, no-tech solution may be the better choice. For example, a simple diary is far easier for most people to fill out on paper than on a device that has to be started, navigated, and have information entered – sometimes with a stylus or keyboard with which they may not be comfortable. Voice response systems work well for a limited number of simple questions, but anything more complex is often asking for trouble. Ask the company that was talked into using an interactive voice response system for collecting key outcome data as part of a pivotal study, and found after study completion that users were so frustrated with the complex system that they resorted to pushing any button to silence it.
Importance of process
The measure of any intervention, particularly technology, must be objective metrics. Software alone will not deliver performance improvement, which seems to be what the industry as a whole is currently expecting. The key is the combination of technology, processes, and people. While technology is the starting point, it is an enabler that most often will not by itself change things. Many companies are learning that changing the tools without changing the processes does not lead to improvements, and that data entry efforts alone do not improve the development process. The industry now seems to be coming to the realization that dropping a web-based data entry system into place often fails to change study timelines. One senior manager recently lamented that a major effort with EDC “failed to save even a single day – and it took five additional months at study initiation”. An investment of this magnitude would likely be judged worthwhile if it produced even a hint of being able to save time, but in this case, it did not.
Technology serves as a foundation on which process change must be built. Our experience with many companies that adopt technology, often in the form of EDC, is that even when the technology works perfectly, the decision-making process often does not. Collected data cannot be accessed in a form needed to make timely decisions, and staff are often not used to the notion of changing the way they work to demand both timely access and timely decision making. Ultimately, the promise of technology is embodied in the ability to make better decisions earlier in the development cycle than is now possible. That ability is based on the availability of more data, of better quality, considerably earlier than is currently available. The link between data and decisions is process. As an example, consider a rising-dose tolerance study where dosing is confined to a single day. The interval between collecting and evaluating data for a given dose and deciding whether to move to the next higher dose currently often stretches to six or more weeks. With technology, however, data can be collected the day of dosing, summarized late in the day and posted to the web, and decision makers can confer if necessary but still make a decision in time to allow the next dosing the following day. This speed, however, requires a number of process changes: the data must be quickly collected and cleaned; statisticians must summarize information as well as provide raw data, even when it means working into the evening; and decision makers must be able to access, discuss, and come to a decision regardless of where they are or what time of day it is. Similarly, one of the greatest benefits of a well-managed study is the ability to appreciate information generated during the course of the study, allowing the follow-on study to be planned even while the initial study is underway and to be rapidly initiated on its completion.
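The compressed dose-escalation timeline described above can be pictured as a small same-day pipeline. The sketch below is purely illustrative – the record fields, function names, and the simple no-toxicity escalation rule are assumptions for the example, not a description of any actual system or clinical decision rule.

```python
# Illustrative sketch: same-day summary of a dosing cohort feeding a
# go/no-go decision for the next dose level. All names and the
# stopping rule are hypothetical.

from dataclasses import dataclass

@dataclass
class SubjectRecord:
    subject_id: str
    dose_mg: float
    adverse_events: int           # count of treatment-emergent AEs
    dose_limiting_toxicity: bool  # flagged by the investigator

def summarize_cohort(records):
    """Summarize the day's cohort for the evening decision meeting."""
    n = len(records)
    return {
        "n": n,
        "dlt": sum(r.dose_limiting_toxicity for r in records),
        "mean_aes": sum(r.adverse_events for r in records) / n if n else 0.0,
    }

def escalate(summary, max_dlt=0):
    """Illustrative rule: escalate only if dose-limiting toxicities
    do not exceed the allowed maximum."""
    return summary["dlt"] <= max_dlt

cohort = [
    SubjectRecord("001", 10.0, 1, False),
    SubjectRecord("002", 10.0, 0, False),
    SubjectRecord("003", 10.0, 2, False),
]
summary = summarize_cohort(cohort)
print(summary)  # posted to the web the evening of dosing
print("escalate:", escalate(summary))
```

The point of the sketch is the turnaround, not the rule: once data are clean the same day, the summary and the decision can happen before the next morning's dosing.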
A fully integrated system is essential: one that includes several different options for data collection, a central processing area that performs validation and manages queries, and improved monitoring functions that focus on the strategic aspects of study management and results. This approach focuses on quickly collecting data, recognizing sites’ different collection capabilities, interests, and facilities, and on a web-based system of reporting that allows real-time reports to be generated directly from the database (Figure 1). The advantages of an integrated approach can best be demonstrated by objective performance metrics and, most importantly, by the bottom line. By integrating the system, metrics can be produced that are consistently superior to industry averages, often by several orders of magnitude (Figure 2). It is also worth emphasizing that the performance reflected here is not the best that can be done, but rather is typical of the system’s performance. A second and broader example of the bottom line is a large global development of a CNS product that saved 1.6 years off the five-year initial timeline and $32 million in direct study costs.
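The central validation step of such an integrated system can be thought of as a set of automated edit checks run against incoming records, with failures emitted as queries back to the site the same day. The field names and plausibility ranges below are hypothetical examples chosen for illustration, not checks from any real study.

```python
# Illustrative sketch: automated edit checks generating data queries.
# Field names and the acceptable ranges are assumptions.

def check_record(record):
    """Return a list of (field, message) queries for one CRF record."""
    queries = []
    age = record.get("age")
    if age is None:
        queries.append(("age", "missing value"))
    elif not 18 <= age <= 90:
        queries.append(("age", f"out of range: {age}"))
    sbp = record.get("systolic_bp")
    if sbp is not None and not 70 <= sbp <= 250:
        queries.append(("systolic_bp", f"implausible value: {sbp}"))
    return queries

incoming = [
    {"site": "101", "subject": "A", "age": 34, "systolic_bp": 128},
    {"site": "101", "subject": "B", "age": 17, "systolic_bp": 300},
]
for rec in incoming:
    for field, msg in check_record(rec):
        # In a real system this would open a query for site resolution
        # rather than print; the point is that it happens immediately.
        print(f"site {rec['site']} subject {rec['subject']}: {field} - {msg}")
```

Running checks like these at the moment data arrive, instead of during a batch review weeks later, is what collapses the query cycle the article describes.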
In the final analysis, ‘appropriate’ use of technology is the keyword. Just as other industries, such as the automobile industry, have markedly changed in response to large challenges, we believe that the pharmaceutical industry will undergo a profound change in the next few years based on technology adoption. The general guidelines shown in the box may help provide a context for an organization’s technology requirements.
Figure 1: Integrated project management system. Note variety of input options and focus on real-time, web-based study progress reporting.
Figure 2: Relative performance of Industry versus Health Decisions. Industry source: Centre for Medicines Research.
Appropriate use of technology
There are times when low-tech or even no-tech works better than slick technology.
Consider the users
All must be comfortable with technology and able to use it as an effective tool: it is the least effective user that will determine a technology’s success. Consider the full range of users – sites, CRO, sponsor, and regulatory bodies as well as the users within each group.
Technology must allow easy customization to accommodate a range of drugs, routes of administration, sponsors, data collection methods, and a host of other issues. A one-size-fits-all approach is the antithesis of good technology, and minimizing timelines means that the user, not the software developer, must be able to formulate a bespoke solution.
Changes are inevitable in complex processes such as clinical trials, and a good system is able to accommodate such changes, which might range from changing a process to altering forms used, quickly and easily.
Integration with complementary elements
The system must function as an integrated unit, not simply one piece of an overall puzzle such as data collection. Rapid data collection means nothing if data cannot be cleaned and used quickly for project management and strategic decision making.
Technology changes many related processes. For example, technology demands that many core functions of teams be redefined. The foundation of a team approach is effective dissemination of information, but few companies are good at this aspect of management.
Redefine organizational functions as needed
The organization must be capable of reacting appropriately to incoming information. This means both general management – quickly spotting trends and addressing issues before they become problems – and interpreting incoming information to make earlier, better strategic decisions based on the availability of more data, of better quality, considerably earlier in the process than is generally available.
Every system needs to have an objective measure of success. Consider what that measure is before making a commitment, talk to others who have used the system in regard to that outcome, and measure investment against that outcome.
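The call for an objective measure of success can be made concrete with metrics computed directly from timestamped study events. The milestone names, dates, and the choice of visit-to-database lag as the metric below are hypothetical illustrations of the idea, not figures from any study.

```python
# Illustrative sketch: an objective performance metric computed from
# timestamped events. Dates and the chosen metric are assumptions.

from datetime import date

def days_between(start, end):
    """Whole days elapsed between two milestone dates."""
    return (end - start).days

# Example milestones: when a patient was seen, and when that visit's
# data were clean and in the database.
visits = [
    {"visit": date(2003, 1, 6), "data_in_db": date(2003, 1, 7)},
    {"visit": date(2003, 1, 8), "data_in_db": date(2003, 1, 10)},
    {"visit": date(2003, 1, 9), "data_in_db": date(2003, 1, 9)},
]

lags = [days_between(v["visit"], v["data_in_db"]) for v in visits]
mean_lag = sum(lags) / len(lags)
print(f"mean visit-to-database lag: {mean_lag:.1f} days")
```

A number like this, tracked before and after introducing a system and compared against an agreed benchmark, is the kind of outcome measure worth settling on before making a commitment.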
Reprinted from Current Drug Discovery, May 2003.