Feds Elevate Comparative Effectiveness to Address Future Healthcare Development

Pharmaceutical Commerce - March 2009

Evaluating medicine and healthcare by their outcomes is now an NIH priority

Of all the new initiatives proposed or undertaken in U.S. healthcare, comparative effectiveness research (CER) holds perhaps the greatest potential for patient benefit, and for economic disruption as well. CER, which falls under the broad category of “evidence-based” medicine, investigates the safety, efficacy, and in some cases long-term outcomes of two or more treatments (drugs, devices, biologics, or surgery) head to head in matched populations. CER may consist of new controlled human trials or retrospective literature studies. CER is not a novel idea by any means. Drug companies have conducted “comparator” studies for years on both drug-drug and drug-surgical alternatives, as have federal health agencies.

During the last presidential election cycle all major candidates counted CER among their healthcare proposals. True to his word, President Obama allocated $1.1 billion, as part of his economic stimulus plan, for CER. The money will be divided between NIH and AHRQ (Agency for Healthcare Research and Quality), which will fund new patient studies and retrospective meta-analyses of already-published trials.

In March NIH formed a panel, the Federal Coordinating Council for Comparative Effectiveness Research, which will coordinate research and guide disbursement of the $1.1 billion but will not recommend clinical guidelines. The Council is expected to issue its first report by July 30, 2009.

NIH and AHRQ efforts will not be restricted to drugs, which make up just 10% of U.S. healthcare spending. NIH has published nearly fifty pages of diverse research topics, and AHRQ has already conducted studies comparing surgical and drug treatment outcomes. The first results from the new round of studies should accrue within two years, but the full impact will probably not be felt for a decade.

While $1.1 billion is a relatively small sum (electronic medical records received $19 billion), and its full impact will take time, all stakeholders have been duly notified that a new paradigm for evaluating pharmaceuticals has emerged.

LEON COSLER, ALBANY COLLEGE

CER will operate at two separate levels. At the top will be comparative effectiveness, which answers the question “which drug works better?” The second level, reimbursement, “is where the rubber meets the road,” says Leon Cosler, PhD, director of the Research Institute for Health Outcomes at the Albany College of Pharmacy and a former research director for the New York State Medicaid Program. This is where lifecycle-and-death decisions will be made, where winners and losers will be created. At both levels, genetic testing forms a critical third dimension for matching drugs and patients optimally, and for maintaining diversity within a drug class. Still, depending on the timing and implementation, one can expect to see the “mother of all battles” fought here on grounds of study design and integrity, pharmacogenomics, patient needs, and conflicting science.

Not so NICE

By distinguishing medicines on the basis of efficacy and safety, CER provides an outcome input for estimating relative risk-benefit. Coupling CER with cost gives pharmacoeconomics, an even more robust rationale for promoting, preferring, rationing, or even denying certain treatments.

Pharmacy benefit management (PBM) firms have for years approximated this through tiering, a practice that justifies higher or lower co-pays as incentives by assuming approximately equal safety and effectiveness for approved drugs in a particular class. CER, if broadly applied, would provide PBMs with another factor to add to their equations.

CER has for years been popular among European government single-payers, many of whom have formed agencies charged with conducting “health technology assessments.” The most notorious among these groups is the UK’s National Institute for Health and Clinical Excellence (NICE — an acronym of which George Orwell would be proud). A few accounts of this agency’s recent activities demonstrate why many in the U.S. fear this approach to managed care:

  • In April 2008 NICE reversed a decision under which patients could receive Novartis’s Lucentis, a biological treatment for macular degeneration, only if both eyes were affected or if the patient had already lost sight in one eye. But even this beneficence had a catch: Novartis had to pay for treatment beyond the first fourteen injections. NICE simultaneously rejected Pfizer’s Macugen, also for macular degeneration, on the basis of cost even though the agency itself had determined it was more effective than Lucentis. It all came down to numbers: Macugen’s cost per quality-adjusted life-year (QALY) was 70% to 360% higher, a difference even someone blind in both eyes would notice.
  • Last month NICE caused another stir when it determined that GlaxoSmithKline’s Tyverb breast cancer treatment was too costly relative to the quality-adjusted life-years patients could expect to gain. Tyverb is already reimbursed in sixteen European countries. NICE calculated a cost of £70,000 (about $100,000) per QALY for Tyverb, which made the drug “not a cost-effective option compared to current standard therapy with capecitabine.” The sponsor’s calculated cost-per-QALY figure, £16,000 ($22,500), is generally accepted as more credible. (The arithmetic behind such rulings is sketched after this list.) In applying for reimbursement in the UK, GSK offered to pay for the first twelve weeks of treatment, long enough to determine if the drug was working, but NICE would not budge. Such risk-sharing deals are common in Europe.
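For readers who want to check the mechanics, here is a minimal sketch, in Python, of the arithmetic a NICE-style body applies: an incremental cost-effectiveness ratio (ICER), the extra cost divided by the extra QALYs gained, is compared against a willingness-to-pay threshold. Only the £70,000 and £16,000 per-QALY estimates come from this article; the £30,000 threshold and the cost/QALY decomposition in the example are illustrative assumptions.

```python
# Sketch of a NICE-style cost-per-QALY decision (illustrative assumptions noted below).

def icer(extra_cost_gbp: float, extra_qalys: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra QALY gained."""
    return extra_cost_gbp / extra_qalys

# Hypothetical decomposition: GBP 35,000 of extra cost buying half a QALY
# reproduces NICE's GBP 70,000-per-QALY figure for Tyverb.
assert icer(35_000, 0.5) == 70_000

ASSUMED_THRESHOLD = 30_000  # assumed per-QALY ceiling (commonly cited NICE range), not from the article

# The two competing per-QALY estimates reported in the article:
estimates = {"NICE": 70_000, "GSK (sponsor)": 16_000}

for source, cost_per_qaly in estimates.items():
    verdict = "cost-effective" if cost_per_qaly <= ASSUMED_THRESHOLD else "too costly"
    print(f"{source}: GBP {cost_per_qaly:,} per QALY -> {verdict}")
```

Run as written, the sketch shows how the same drug can clear or miss the bar depending on whose per-QALY estimate is accepted, which is exactly the dispute described above.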

In its defense, NICE must make do under budget constraints that reduce its decisions to simple arithmetic. “It’s a very well-defined line,” says Brenda Gleason, MPH, president of M2 Health Care Consulting (Washington, DC), a health policy consulting firm. “If you clear it you’re covered; if you don’t you’re not.”

US policy fashioned after NICE would be a radical departure from current US reimbursement practice. It would also create winners and losers, which, as any pharmaceutical executive will tell you, stifles innovation. But here the overused expression rings true: NICE would not enjoy its abundance of choices, or have a reason to exist, without the abundance of “expensive, unnecessary” drugs developed in the US.

Yet, we are told, we have nothing to fear. “There is trepidation that the US system will adopt draconian practices that focus too vigorously on cost and not clinical effectiveness,” notes John Doyle, DrPH, Managing Director at Quintiles Consulting (Hawthorne, NY). “However, the government has done a fair job of communicating that this is not their intention. The language inserted into the stimulus bill makes it clear that the objective will be clinical effectiveness, not cost-effectiveness.”

But if “cost-effectiveness” is not the point, what is? Hitching a cost model onto the CER bandwagon will not be difficult, the original bill’s wording notwithstanding. The question is whether, or perhaps when and under what circumstances, U.S. regulators and payers will use that information to withhold or ration treatments, as is done in Europe today.

Co-pay barricades

Lacking a scientific, legal, or regulatory basis for restricting access to expensive medicines, payers have thus far exercised restraint. But restrictions are “inevitable,” says John Carlsen, senior director for health economics at Covance (Gaithersburg, MD). “Insurers would like to do more to address differences between drugs, but right now they feel their hands are tied. Many have told us they are looking for Medicare or the government to do something first, which is still a long way off. We are beginning to see political momentum in this area, but it will still be a while before comparative effectiveness research has an impact on coverage and reimbursement.” Regardless, Carlsen says, a NICE model is “unlikely” for the US, which will most likely base its system on effectiveness, with cost a secondary consideration.

Payers may enact restrictions such as prior authorization, or assign a higher co-payment, but for the time being any drug labeled and approved for a particular condition will be covered. The exception to this has been the use of red blood cell-boosting biologic therapies, such as erythropoietin, which are expensive and have been associated with serious side effects. Medicare at one time restricted use of these agents beyond what the label suggested, but FDA has since ordered label changes to reflect safety issues.

Treatments that suffer formulary or reimbursement penalties on the basis of large-population CER trials might be found to be effective for subgroups, which is the point of personalized medicine. To date, comparative effectiveness has been carried out on large, undifferentiated populations, which provides on-average efficacy estimates at best.

Personalized medicine, which may ultimately save the drug industry from cost-cutting frenzy, applies to slices of populations deemed, by virtue of a genetic test, to be “responders.” CER and personalization are in fact complementary, and their marriage “is what everyone is hoping for,” says Dr. Doyle of Quintiles.

But not any time soon. Personalized medicine has been given a ten- to twenty-year timeline before it gains traction, and the $1.1 billion for CER is only a start. “Meshing those two trends will be tricky,” observes Jane Horvath, senior director for policy at Merck (Rahway, NJ), “but it can absolutely be done. There will be challenges for our industry, but I also think there are opportunities [as well].”

Genetic testing—not

To say that synergies between personalized medicine and comparative efficacy will require coordination is an understatement. Insurers, for example, are not uniformly enamored of pharmacogenomic testing, even when the evidence appears compelling. “They tend to pay if the drugs involved are expensive, where it’s an economic no-brainer,” says Kristine Ashcraft, marketing director at pharmacogenetic testing company Genelex (Seattle, WA).

As you read this, Ashcraft is appealing a decision by insurer United Healthcare not to cover a genetic test that predicts a woman’s response to tamoxifen, a cancer-preventive drug routinely administered for five years after successful removal of a breast tumor. Women whose CYP2D6 gene is non-functional cannot convert tamoxifen into its active form, and they have a four-fold higher risk of breast cancer recurrence than women whose bodies can metabolize the drug.

Since the gene is inactive in only 7% of women, the cost of the test ($295) compared with tamoxifen’s (about $1.30 per day) becomes a factor. What complicates the issue is that another drug, anastrozole, works like a charm for non-responders but costs $6.50 per day. An insurer who does not reimburse for the test saves $29,500 outright for every 100 women starting tamoxifen, plus roughly $66,430 in drug costs (the five-year cost differential of anastrozole over tamoxifen for the roughly seven non-responders in that group). Testing clearly does not “pay” here, even though medical experts believe it to be the standard of care.
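A back-of-the-envelope check of that arithmetic, written as a short Python sketch; every figure comes from the paragraph above, and the five-year horizon matches the standard course of tamoxifen therapy described earlier.

```python
# Reconstruction of the insurer's arithmetic, per 100 women starting tamoxifen.
TEST_COST = 295.00            # CYP2D6 test, per patient (USD)
TAMOXIFEN_PER_DAY = 1.30      # USD
ANASTROZOLE_PER_DAY = 6.50    # USD
NON_RESPONDER_RATE = 0.07     # share of women who cannot metabolize tamoxifen
YEARS = 5
COHORT = 100

testing_cost = COHORT * TEST_COST                      # cost of testing all 100 women
non_responders = COHORT * NON_RESPONDER_RATE           # ~7 women who would switch drugs
switch_cost = (non_responders
               * (ANASTROZOLE_PER_DAY - TAMOXIFEN_PER_DAY)
               * 365 * YEARS)                          # 5-year drug-cost differential

print(f"Testing all 100 women:            ${testing_cost:,.0f}")    # $29,500
print(f"Switching the non-responders:     ${switch_cost:,.0f}")     # $66,430
print(f"Total the insurer avoids:         ${testing_cost + switch_cost:,.0f}")
```

The totals reproduce the $29,500 and $66,430 cited above, which is why, from a pure cost standpoint, the insurer has little incentive to pay for the test.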

Tamoxifen is by no means an unusual case. Approximately half of all prescription drugs undergo metabolic processing that varies from person to person according to the patient’s genetic makeup. It takes about six years for insurance companies to acknowledge the benefits of a pharmacogenetic test, “but for the right patient these drugs make a difference right away,” says Ashcraft.

Testing benefits are more clear-cut for other drugs. A genetic test to determine the optimal dose of the blood thinner warfarin costs as much as $600. Yet the cost of hospitalization for blood clots or excessive bleeding more than justifies testing every patient. “If you save one hospitalization for every 100 new warfarin users, you more than offset the cost of testing all 100,” said Dr. Robert S. Epstein, chief medical officer of Medco Health Solutions, in a New York Times interview in late 2008.
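Epstein’s claim rests on a simple break-even: at the article’s high-end test price, screening 100 patients costs $60,000, so one avoided hospitalization need only cost at least that much for testing to pay for itself. A minimal Python sketch of that derivation follows; the break-even figure is derived from the numbers above, not a hospitalization cost reported in the article.

```python
# Break-even behind the warfarin-testing claim (derived, not reported, figures).
TEST_COST = 600        # USD, upper end of the price range cited above
PATIENTS = 100         # new warfarin users

total_testing_cost = TEST_COST * PATIENTS   # $60,000 to test everyone
print(f"Testing {PATIENTS} patients costs ${total_testing_cost:,}.")
print(f"Preventing one hospitalization worth at least ${total_testing_cost:,} "
      "offsets the entire testing bill.")
```

At a lower test price the break-even hospitalization cost drops proportionally, which only strengthens the argument for routine testing.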

A hard sell?

Not everyone agrees that more information is always better, or that CER can have no downside. Because medicine improves by baby steps, with slightly more effective but much costlier drugs leading the way, a purely pharmacoeconomic approach unduly penalizes the incremental advances that become the foundation for next-generation improvements, and for those that come after.

VAL JONES, MD

Childhood leukemia was nearly universally fatal forty years ago; today the cure rate is 85%. “This occurred incrementally,” says Val Jones, MD, a blogger and health policy expert at MedPage Today, through the approval of drugs offering incremental benefits that might not pass a strict NICE-style pharmacoeconomic standard.

And if history is any indication, CER will be a hard sell to practitioners. The huge ALLHAT (sponsored by the National Heart, Lung, and Blood Institute) and CATIE (National Institute of Mental Health) trials, begun during the 1990s to compare blood pressure medicines and antipsychotics, respectively, cost more than $200 million and found that older drugs equaled or outperformed newer, more costly medicines. Yet ten years later prescribing habits have barely changed.

Scott Gottlieb, MD, former FDA official and now a fellow at the American Enterprise Institute, has written that ALLHAT and CATIE were poorly conceived and had serious flaws. Gottlieb favors CER but worries that “current plans for CER do not address the lessons learned from CATIE and ALLHAT.” He and others have argued that extensive literature analysis of previous trials might be a better use of resources than large, expensive studies.

Top drug firms are unlikely to admit how difficult it is to construct a scenario in which a top-selling product gains from comparative studies. The argument that specificity and safety lead automatically to premium pricing is fine in theory, but how that would occur, given competing stakeholder interests, is far from clear. Moreover, the necessary studies, approvals, and labeling cannot be orchestrated in sync, or even in a timely way: by the time the label incorporates a companion test, the drug’s patent may have expired or a new competitor may have emerged.

Industry advisors recommend a wait-and-see approach to the federal efforts. For one thing, the actual processes for conducting CER remain to be determined, says Thomas Bramley, senior director at AmerisourceBergen Specialty Group’s Xcenda unit (see Fig. 1). In the meantime, biopharma companies should evaluate existing comparative data on their products, monitor the actions of the Federal Coordinating Council, and assess the exposure of their product portfolio to agency attention.

Top drug firms are approaching CER confidently, at least in public. Jane Horvath of Merck believes her company has much to gain from comparative efficacy. “Payers, providers, and patients want data that is independently generated. Those are our customers, and that is why we support it.”

But it is more likely that CER will not be an easy sell, and its implementation will ruffle a lot of feathers. Leon Cosler of the Albany College of Pharmacy does not expect the private sector to “run with this [CER] baton.” This is not surprising, given that the implied goal is to align the US healthcare system more closely with those of other industrialized countries, where an agency independent of the approval regulators determines cost-effectiveness. On this point Cosler is not alone. Princeton health economist Uwe Reinhardt has been pushing strict pharmacoeconomics since the early 1990s. More recently, economist Gail Wilensky, currently a senior fellow at Project HOPE and a former administrator of the federal Medicare and Medicaid programs, advocated formation of an agency to quantify health outcomes.

Still, the genie of CER will not go back into its bottle voluntarily. “At the end of the day payers want simple tools to manage the utilization of expensive medical products,” says Quintiles’ Doyle, “and the best way is through evidence-based medicine.”

CER represents the opportunity, using tools already in our possession, to guide healthcare optimally in an environment of increasingly overstretched resources. But to work, says Cheryl Hankin, PhD, president of health outcomes research firm BioMedEcon (Moss Beach, CA), CER must be transparent, collaborative, and scientifically valid, “and the results must be usable and make sense. Everyone these outcome-based studies reach and touch should have input.”
