The pharmaceutical industry is focused on developing new drugs as quickly and cost-effectively as possible. A frequent frustration is the delay between regulatory approval and payers’ coverage decisions.
You may wonder why the same randomized controlled clinical trials (RCTs) that pharma companies conduct for regulatory approval would not also suffice for payers. After all, the trials used to establish a drug’s efficacy generally compare two or more treatments (or one treatment against placebo) and are a time-honored strategy for generating high-quality, reliable evidence for decision support. These elegant comparisons use randomized treatment allocation, blinded outcome assessment, and extensive data monitoring to ensure quality and completeness.
Pharmaceutical companies have a lot riding on these pivotal trials. They not only want clean comparisons and unbiased observations; they also want to enroll the patients most likely to respond to the new treatment, along with the best-trained doctors and optimal settings, often in academic facilities with extensive resources at hand. They often enroll narrowly defined patient groups to reduce confounding (hidden effects), but the relatively small number of patients, highly restrictive inclusion/exclusion criteria, short follow-up periods, and tests and measurements not generally used in clinical practice can make translation to everyday care difficult.
Payers are left wondering how their more diverse populations will respond to treatments administered in typical care settings, whether short-term benefits (including surrogate markers) will translate into longer-term benefits that outweigh risks, and at what cost. They are particularly interested in comparative benefits, risks, and costs relative to other treatments commonly used for the same conditions, including delayed benefits and risks.
The anticoagulant example
The difference between pharma-originated studies and payer interests can be illustrated by anticoagulant studies. In RCTs of anticoagulants, patients are usually examined for venous thromboembolism (VTE) by ultrasound and venography, whereas in typical care settings VTE is identified symptomatically rather than through routine radiographic screening. Further, warfarin, a long-time mainstay anticoagulant, has been used as the comparator in trials of newer oral anticoagulants. The success of warfarin depends on careful dose titration, which can be managed superbly in a classical RCT; in real-world settings, however, patients do not present for dose titration with such regularity, and because titration is often suboptimal, the real-world experience of patients on warfarin differs markedly from that of patients on products that do not require such careful monitoring.
Although this warfarin example may sound extreme, a growing body of data shows that the exclusion criteria used in RCTs often eliminate a large proportion of the patients who need treatment. For example, a recent Pfizer study of the Swedish Rheumatology Registry, linked with other national data, found that trial exclusion criteria may eliminate nearly half of the patients expected to be treated with the product in the real world. While RCTs have the luxury of limiting enrollment to patients most likely to show a response and least likely to experience serious adverse events, payers generally cannot impose such restrictions on their covered populations.
Payers regularly make and update decisions for coverage and formulary placement of pharmaceutical treatments. Their goals are fairly clear and consistent — they need to assess products in the context of the populations they cover, which include patients with serious comorbidities, pediatric and geriatric patients, those on multiple medications, etc. Their decisions are expected to be evidence-based and clinically viable and to translate to healthier patients, less resource utilization, and lower costs.
Currently, many payers routinely use their own healthcare data to examine practice patterns and clinical outcomes and to evaluate changes in their coverage decisions and treatment guidelines, gauging real-world effectiveness. They also reportedly draw on published literature and other data reports for decision-making, while assigning the highest evidentiary weight to RCTs (or, sometimes, even higher weight to meta-analyses of those RCTs). While real-world evidence (RWE) may sound like the panacea here, it is not always timely or useful, particularly when there is little real-world usage in the patient groups of interest. In fact, a recent study of US payers found that the most frequent use of RWE was to evaluate safety concerns. We hear that payers and clinicians often do not fully understand current RWE methods and data, or how to interpret the results. This can make it difficult to assess evidence quality, whether the data are “fit for purpose,” and whether the design and analytic methods are reasonably robust.
Among the issues cited as preventing payers from fully leveraging RWE in decision-making are the relevance of study endpoints, the timeliness of results, the transparency of study methods, and the lack of knowledgeable staff who could help payers access appropriate data and interpret RWE, including issues of bias and confounding, incomplete data, data mining, the absence of universally accepted definitions, and limited investigator expertise.
The optimal solution?
Pragmatic randomized trials offer a hybrid approach to the classical RCT. They generally use randomization to assign treatments, thereby ensuring that patients of interest will be treated with the study drug under evaluation. These rigorous pragmatic trials contribute real-world insights about how patients use treatments, how well they work (and for whom), and whether patients benefit from treatments “as used,” as actually provided in various settings. The key to these trials is treatment randomization, which achieves balance in risk factors between treatment groups.
One hallmark of pragmatic trials is that the outcomes studied are of clear interest to patients, practitioners, and payers, generally avoiding many of the surrogate endpoints used in classical RCTs. Multiple comparators are often used, collectively called “standard of care,” which refers to whatever is used locally, regardless of whether it meets practice guidelines. Placebos are not used, nor are treatments single- or double-blinded. Data are generally collected through routine clinical visits, with minimal supplementary data collection for study purposes. Patient-reported outcomes are often used to supplement clinical assessments. There is little, if any, source data verification: data sets are curated for accuracy and completeness of data and linkages, but individual data elements are rarely verified, especially since some real-world data, such as electronic medical records, are themselves the source documents. Studying these pragmatic outcomes in real-world settings with a light-touch, hands-off approach allows the pragmatic RCT to show how a product works in the real world, rather than leaving us to guess how classical RCT results might translate into actual practice.
The pragmatic vision
The vision of convincing pharma companies to take a more pragmatic focus and use RCTs with real-world outcomes to serve both regulatory and payer needs is more aspirational than realistic for now, in large part because of the absence of clear FDA guidance on what a pragmatic RCT for label expansion should look like. What is evident is that pragmatic trials work best for understanding the clinical effectiveness and safety of products already on the market, while classical RCTs are better suited to evaluating new, not-yet-approved molecular entities and indications.
However, since guidance on using pragmatic RCTs is likely to be developed in support of the mandate of the 21st Century Cures Act, the question for payers will be how to reliably gauge the impact of differences between their covered populations and the health practices and patients studied. Can you extrapolate from Cleveland to Clearwater? The experience of Medicare or Medicaid populations is likely to differ from that of populations covered by employer-sponsored plans. But beyond large socioeconomic differences, how much factors like ethnicity, environment, and employer/health plan could affect study results is less clear-cut.
What is clear is that we need real-world comparators so that we can address everyday choices, and we need information on diverse patients, not just those most likely to respond to treatment. With some modest modifications to the RCT paradigm, such as those described here for pragmatic trials (real-world outcomes, comparisons with other active treatments, etc.), payers and clinicians can much more easily understand how treatments will benefit the types of people they cover.
RWE can be an important supplement to the evidence payers currently evaluate. Payers will benefit from the growing use of RWE by regulators, including current efforts by the FDA to establish a framework and guidance documents for its use. Pragmatic trials introduce balanced treatment arms (and systematic product use) through randomization and then use systematic, purpose-driven data collection to evaluate outcomes that matter to patients, clinicians, and payers. These familiar tools, along with growing regulatory acceptance, should help address the concerns of those who are unfamiliar with real-world data and methods and who worry that results could be largely attributable to systematic flaws in the design, analysis, or underlying data.
Imagine the efficiencies in getting better treatments to patients faster if payers’ and regulators’ interests could be addressed in a single study design, like a pragmatic trial. The time is now.
About the authors
Nancy Dreyer is chief scientific officer and SVP at IQVIA. She focuses on generating real-world evidence for regulators, clinicians, patients and payers through pragmatic trials and non-interventional approaches. She is a Fellow of both DIA and the International Society of Pharmacoepidemiology.
Matthew W. Reynolds, PhD, is VP, Real World Evidence at IQVIA. He designs innovative solutions for real-world evidence on effectiveness and safety. He founded and is current Chair of the Database Special Interest Group for the International Society of Pharmacoepidemiology.
References
1. Frisell T, Jelinsky S, Peterson M, Askling J. Impact of varying exclusion criteria on treatment response: real-world evidence to guide the design of future clinical trials. Annals of the Rheumatic Diseases 2019; 78(Suppl 2).
2. Malone DC, Brown M, Hurwitz JT, Peters L, Graff JS. Real-World Evidence: Useful in the Real World of US Payer Decision Making? How? When? And What Studies? Value in Health 2018; 21: 326-333.
3. Hampson G, Towse A, Dreitlein WB, Henshall C, Pearson SD. Real-world evidence for coverage decisions: opportunities and challenges. Journal of Comparative Effectiveness Research 2018; 7(12): 1133-1143.
4. Christian JB, Brouwer ES, Girman CJ, Bennett D, Davis KJ, Dreyer NA. Masking in pragmatic trials: who, what and when to blind. Therapeutic Innovation & Regulatory Science. First published online, May 2019. https://doi.org/10.1177/2168479019843129