
Thursday, May 31, 2012

No Quick Fix for Comparative Effectiveness

This month’s Science Matters column in START-UP looks at a recent assessment of the characteristics of clinical trials recorded in the US-based registry ClinicalTrials.gov, and suggests that the questions that paper raises for interventional trials also apply to other strands of clinical research, notably those focused on comparative effectiveness.

The authors of the assessment, which was reported in the May 2 issue of the Journal of the American Medical Association, noted that the trials recorded in ClinicalTrials.gov suffer from defects in methodology and standardization. The problems are even more acute in comparative effectiveness research (CER), and they are one reason FDA places scant value on observational studies, at least as currently conducted.

Addressing those issues -- in both realms -- is critical. “In our traditional evidence development framework, we were trialists or observational scientists but rarely both,” Richard Gliklich, president of the Outcome unit of Quintiles Transnational, told attendees at the annual Post-Approval Summit held earlier this month at Harvard Medical School. “In the emerging framework, this false dilemma is no longer affordable. There are too many questions to answer, too many settings, too many populations.”

It’s easy to find examples of CER methodologies causing confusion rather than creating clarity. A recent CER-skeptical story in The Wall Street Journal led with two studies that used the same UK patient database yet drew very different conclusions about whether osteoporosis drugs increase the risk of esophageal cancer. It’s also tempting to draw a dividing line between efficacy and effectiveness research, but the discussion is more nuanced. As Gliklich said, both approaches are needed. In each realm, data need to be gathered using methodologies that allow for apples-to-apples comparisons.

Gliklich made his remarks introducing Summit keynote speaker Michael Rosenblatt, CMO at Merck, who went on to highlight many of the key challenges around CER. For example, if a data set is biased, “you can get the wrong answer, but with great precision,” he said: in many cases it may be possible to detect very small changes in risk estimates, but not to understand whether they are clinically significant. Plus, “something you would think would be clear-cut like a diagnosis of a myocardial infarction, where you have cardiograms and a blood test, still has about a 15% miscoding rate,” he said. In such a case, comparing one drug’s side effects with another’s, where one drug might have a meaningful but small percentage difference in efficacy, would be impossible.

Making a CER framework valuable is a formidable challenge. Health care provider systems all do things differently: can they rely on outside studies, even the best ones from other places, or does the analysis still have to be done institution by institution? And if so, do they have the resources? Following the meeting, we put that question to Kevin Tabb, CEO of Beth Israel Deaconess Medical Center in Boston, who was a panelist at a May 18 symposium on health care costs held at the MIT Sloan School of Management. “It still has to be done institution by institution,” he said. “It’s incredibly expensive and we don’t have the tools to do it.”

In his opening remarks to the MIT Sloan gathering, Massachusetts Governor Deval Patrick referred to the 2006 Massachusetts health care reform legislation. That a solution is not perfect is no reason to do nothing, he said: “It’s [not a matter of] a perfect solution versus no solution.”

We hope that observation holds for the newly formed, high-profile Patient-Centered Outcomes Research Institute, the entity charged with enabling much of the US’s future CER efforts.

PCORI has spent much of its first year debating and drafting methodological guidelines and standards, which will be posted in draft form next week. (For more on PCORI’s preparations for the release of its methodology report, look here.) But industry groups have criticized PCORI (as reported here and here, for example) for its timing and for the lack of specificity in its proposed research agenda, which will be a considerable departure from the more familiar investigator-led study design format. Its start-up was at first deliberate, as befits a public-private partnership trying to obtain popular buy-in to CER without stirring up fears of drug rationing. Now, however, PCORI seems to be in a more frantic hurry-up mode as it seeks to dole out an initial $120 million in research funding by the end of 2012 -- issuing guidelines for its initial funding announcements only after a Board of Governors meeting on May 21.

PCORI's invocations of the value of patient-centeredness have sounded simplistic at times, like Dorothy following the yellow brick road to the wonderful land of Oz. We know that's not the case, and that it's easy to take shots, as the WSJ did, at CER in any form. But we also know the road ahead is unpaved and will be bumpy, requiring serious and careful navigation. Duke's Rob Califf, first author of the JAMA paper on ClinicalTrials.gov, made the case perhaps more succinctly and pointedly than PCORI itself has managed. Establishing that a drug has some efficacy in a clinical setting does not answer the real-world questions of how to use it, when to use it, how long to give it, and how to compare it with others, he told us. “That’s what comparative effectiveness is all about and where you need the spectrum of different kinds of observational studies and randomized trials."
