
The Case for Making All Studies Operationally ‘Adaptive’



By Deb Borfitz

July 20, 2009 | The time- and cost-saving benefits of adaptive clinical trials (ACTs) will be lost on companies unless they can generate an uninterrupted flow of the information on which adaptations will be based. Getting the best information as fast as possible is mission-critical with ACTs because the decisions made reroute the study path and affect final outcomes, says Mike Ford, head of data management at Health Decisions.

Operationally, sponsor companies likewise need to be “adaptive” to optimize how studies actually get run, says Ford. Enabling study teams to make swift and smart choices boils down to four steps:

1) Identify all data sources. “Case report forms [CRFs] are only one source of data on a trial,” says Ford. “Capturing the metadata, information pertaining to all other aspects of the trial…is important to understanding the full picture of a trial’s health.”
2) Integrate the study data and metadata sources to generate immediate, reliable metrics and reports, putting them in front of the right people at the right time so every team member has the information needed to address problems and make timely decisions. One tool Health Decisions uses on ACTs is a “widget” that provides up-to-the-minute desktop views of enrollment, the screen-failure rate, the number of active participants, study data queued for monitoring, and other snapshots of progress, says Ford (a rough sketch of such a view follows this list).
3) Engage the entire study team in continually analyzing those metrics and finding ways to overcome bottlenecks. Ford says team members should routinely ask themselves: What is the current state of the study? What work is left? How are we performing? How much time is left?
4) Encourage “adaptive actions” based on informed decisions to keep the study on track and on budget. That might mean closing down non-enrolling sites or correcting an error on a CRF that is generating an unusually high number of queries.
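
As a rough illustration of the desktop “widget” view described in step 2, the sketch below models a point-in-time snapshot of study health. The field names, properties, and numbers are illustrative assumptions for this article, not Health Decisions’ actual tool.

```python
from dataclasses import dataclass

@dataclass
class StudySnapshot:
    """Point-in-time operational metrics for a trial (illustrative fields only)."""
    enrolled: int             # participants enrolled to date
    enrollment_target: int    # planned enrollment
    screened: int             # participants screened to date
    screen_failures: int      # participants who failed screening
    active_participants: int  # currently active participants
    fields_awaiting_sdv: int  # data fields queued for source-document verification

    @property
    def screen_failure_rate(self) -> float:
        return self.screen_failures / self.screened if self.screened else 0.0

    @property
    def enrollment_progress(self) -> float:
        return self.enrolled / self.enrollment_target

# Example: the kind of up-to-the-minute view a study team might refresh each morning.
snapshot = StudySnapshot(enrolled=412, enrollment_target=600, screened=655,
                         screen_failures=180, active_participants=391,
                         fields_awaiting_sdv=24_500)
print(f"Enrollment: {snapshot.enrollment_progress:.0%}, "
      f"screen-failure rate: {snapshot.screen_failure_rate:.0%}, "
      f"fields queued for monitoring: {snapshot.fields_awaiting_sdv:,}")
```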

[Photo: Mike Ford]
Ford says the savings associated with adaptive study monitoring are impressive. Monitoring trial data can consume as much as a third of the overall study budget when clinical research associate (CRA) visits happen according to a pre-defined schedule (customarily every six to eight weeks). High-performing sites get monitored as frequently as under-performing sites. In contrast, adaptive monitoring takes each site’s performance into account, so monitoring visits are scheduled only when enough work has accumulated at a particular site.
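
The visit-triggering logic Ford describes could look something like the following sketch, which sends a CRA only once a site has accumulated at least a full day of source-data verification work. The threshold, function name, and site backlogs are illustrative assumptions, not the firm’s actual rules.

```python
# Illustrative sketch: schedule a monitoring visit only when a site has
# accumulated at least a full day's worth of source-data verification (SDV).
FIELDS_PER_CRA_DAY = 1_000   # fields an average CRA can verify per day (per the article)

def needs_visit(unverified_fields: int, min_days_of_work: float = 1.0) -> bool:
    """Return True when the site's backlog justifies sending a CRA."""
    return unverified_fields >= min_days_of_work * FIELDS_PER_CRA_DAY

sites = {"Site 101": 1_850, "Site 102": 240, "Site 103": 990}
to_visit = [s for s, backlog in sites.items() if needs_visit(backlog)]
print(to_visit)  # ['Site 101'] -- low-backlog sites wait instead of getting a calendar visit
```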

Adaptive monitoring practices also keep trials running smoothly by distributing CRAs’ workloads according to their individual abilities and capacities. Monitors with lighter loads might double up on site visits within a given geography; those overburdened with work could get assistance from a co-monitor. But to make these types of adaptations, study teams need a way to visualize the hot spots and dead zones of monitoring activity before CRAs are on site.
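
A minimal sketch of that kind of load balancing, under assumed inputs (the regions, backlogs, and CRA names are hypothetical), might greedily hand each pending visit to the least-loaded monitor covering the site’s region:

```python
from collections import defaultdict

# Hypothetical inputs: pending visits (site, region, days of work) and each CRA's region.
pending_visits = [("Site 101", "Southeast", 2), ("Site 104", "Southeast", 1),
                  ("Site 201", "Midwest", 3), ("Site 202", "Midwest", 1)]
cra_regions = {"CRA-A": "Southeast", "CRA-B": "Southeast", "CRA-C": "Midwest"}

workload = defaultdict(int)      # days of work assigned so far, per CRA
assignments = defaultdict(list)  # site visits handed to each CRA

for site, region, days in sorted(pending_visits, key=lambda v: -v[2]):
    # Greedy heuristic: give the visit to the least-loaded CRA covering that region.
    eligible = [c for c, r in cra_regions.items() if r == region]
    chosen = min(eligible, key=lambda c: workload[c])
    workload[chosen] += days
    assignments[chosen].append(site)

print(dict(assignments))
# {'CRA-C': ['Site 201', 'Site 202'], 'CRA-A': ['Site 101'], 'CRA-B': ['Site 104']}
```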

Traditional monitoring for a 2.5-year, 100-site study might involve 1,625 monitoring visits, Ford continues. The same study, using adaptive monitoring, could be accomplished with 38% fewer visits. The savings would accrue from orchestrating CRA travel around estimates of the database size (in this example, 1 million fields) and of how many fields per day the average CRA can verify (1,000), and from sending CRAs to sites only when there is enough data to fill a full day.
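
Ford’s figures are easy to check with back-of-the-envelope arithmetic using only the numbers quoted above, assuming adaptive monitoring needs roughly one visit per CRA-day of verification work:

```python
# Back-of-the-envelope check on the figures quoted above.
total_fields = 1_000_000        # database size in this example
fields_per_cra_day = 1_000      # fields an average CRA can verify per day
traditional_visits = 1_625      # fixed-schedule monitoring: 2.5 years, 100 sites

# If CRAs travel only when a full day's verification work is waiting,
# the floor on visits is roughly one visit per CRA-day of work.
adaptive_visits = total_fields / fields_per_cra_day      # 1,000 visits
reduction = 1 - adaptive_visits / traditional_visits     # ~0.385
print(f"~{reduction:.0%} fewer visits")                  # ~38% fewer visits
```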

Successful execution of an ACT relies on sites’ ability to collect data conveniently, says Ford, so “alleviating site workload” should be a top concern. Study teams can’t identify preventable errors and take corrective action until data are submitted and analyzed. And that’s an important detail, given that correcting an error on a typical clinical trial is estimated to cost as much as $350 per query. On a study with 1 million data fields, a 5% error rate would generate 50,000 queries. Even a single percentage-point drop could result in up to $3.5 million in savings.
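
The same arithmetic applies to query costs; the sketch below simply works through the figures quoted above.

```python
# Worked version of the query-cost math quoted above.
total_fields = 1_000_000
cost_per_query = 350            # dollars, upper-end estimate per the article

def query_cost(error_rate: float) -> int:
    """Total cost of resolving queries at a given field-level error rate."""
    return int(total_fields * error_rate * cost_per_query)

baseline = query_cost(0.05)     # 50,000 queries -> $17.5 million
improved = query_cost(0.04)     # one percentage point lower -> 40,000 queries
print(f"Savings: ${baseline - improved:,}")   # Savings: $3,500,000
```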

Refining trial operations in this way requires a “strong, integrated technology infrastructure capable of processing and interpreting huge amounts of both data and metadata,” says Ford. Health Decisions uses EDC technology it calls Smartpen, which integrates with its trial management system to automatically clean and analyze much of the data. The system then converts those data into reports “designed to validate adaptive decisions about the future course of the trial.”

Health Decisions has run only three studies with adaptive design components, Ford says. But from an operational standpoint, “every trial we implement for sponsors is adaptive.”


