Generating real-world evidence at scale using advanced analytics

Advanced techniques for generating real-world evidence (RWE) help pharmaceutical companies deliver insights that transform outcomes for patients—and create significant value. McKinsey estimates that over the next three to five years, an average top-20 pharma company could unlock more than $300 million a year by adopting advanced RWE analytics across its value chain.

Opportunities to improve outcomes for patients are proliferating thanks to advances in analytical methods, coupled with growing access to rich RWE data sources. Pharma companies can collect hundreds of millions of records with near–real-time data and deploy them in use- and outcomes-based research and in risk-sharing agreements with stakeholders. Yet although the outlook is promising, many pharma companies still struggle to deploy advanced analytics (AA) effectively to generate RWE. In this article, we explore how to set up analytics teams for success, select use cases that add value, and build a technical platform to support AA applications—from early efforts to deployment at scale.

Create interdisciplinary RWE teams with AA expertise

Historically, RWE departments have employed professionals with expertise in biostatistics, health policy, clinical medicine, and epidemiology (including pharmaco-epidemiology). These experts have extracted value from real-world data (RWD) by using classical descriptive and statistical analytical methods. To add advanced analytics to the methodology toolbox, pharma companies must make two key organizational and cultural changes: creating teams with expertise in four disciplines and working to combine the strengths of their different analytical cultures.

The composition of teams

To deliver RWE analytics at scale, teams need experts from four distinct disciplines.

Clinical experts provide medical input on the design of use cases and the types of patients or treatments to be considered. By taking into account clinical-practice and patient access issues, they can also help interpret the patterns in data. When a project ends, they help the organization to understand the medical plausibility of the results.

Methods experts with a background in epidemiology and health outcomes research or in biostatistics ensure that the bar is sufficiently high for the analytical approach and for the statistical veracity of individual causal findings. People with this profile have often played, and can continue to play, the translator role (see below), which is responsible for securing internal alignment on, for example, study protocols and the validation of the main outcomes of interest.

Translators act as intermediaries between business stakeholders and clinical and technical teams. They are responsible for converting business and medical requirements into directives for the technical team and for interpreting its output in a format that can be integrated into business strategies or processes. Translators may have clinical, epidemiological, or analytics backgrounds, but this is not a prerequisite. Communication and leadership skills, an agile mindset, and the ability to translate goals into analytics and insights into actions and impact are the prerequisites.

AA specialists assemble the technical data and build the algorithms for each use case. Statisticians design robust studies and help minimize biases. Data engineers create reusable data pipelines that combine multiple data sources for modeling. Machine learning (ML) specialists use the data to build high-performance predictive algorithms.

These four groups of experts work iteratively together to deliver use cases, much as software teams deliver projects via agile sprints. This approach enables teams to test early outputs with clinical and business stakeholders and to adjust course as needed.

Bridge different cultures in the analytics team

Pharma companies tend to draw analytics professionals from two historically separate cultures with different priorities and methodological groundings.

The culture of biostatisticians is based on explanatory modeling: adjusting RWD to approximate the settings of a randomized clinical trial (RCT) so that the team can extract valid causal relationships from the data. Explanatory modeling relies on techniques such as propensity scores to mimic RCT conditions and on confidence intervals and p-values to assess the robustness of findings and to avoid false discoveries.
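As a concrete illustration of this workflow, here is a minimal sketch of inverse-probability-of-treatment weighting with propensity scores, using scikit-learn on synthetic data; the column names and confounders are illustrative, not taken from any real study:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Synthetic cohort: baseline confounders, a treatment flag, and an outcome.
df = pd.DataFrame({
    "age": rng.normal(60, 10, n),
    "comorbidity_score": rng.normal(2, 1, n),
    "treated": rng.integers(0, 2, n),
    "outcome": rng.integers(0, 2, n),
})
confounders = ["age", "comorbidity_score"]

# 1. Estimate each patient's propensity to receive the treatment.
ps = (LogisticRegression(max_iter=1000)
      .fit(df[confounders], df["treated"])
      .predict_proba(df[confounders])[:, 1])

# 2. Weight patients by the inverse probability of the arm they actually
#    entered, so the weighted arms resemble a randomized trial.
w = np.where(df["treated"] == 1, 1 / ps, 1 / (1 - ps))

# 3. Contrast weighted outcome rates to estimate the average treatment effect.
t = df["treated"] == 1
ate = (np.average(df.loc[t, "outcome"], weights=w[t])
       - np.average(df.loc[~t, "outcome"], weights=w[~t]))
print(f"IPTW estimate of the average treatment effect: {ate:.3f}")
```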

By contrast, the culture of data scientists and AA/ML practitioners focuses on predictive modeling: the use of flexible models that can capture heterogeneity in data and offer good predictions on novel examples. Predictive modeling relies on methods such as boosting and ensembling, which tend to work better in big data environments. These experts seek to maximize predictive performance, even (sometimes) at the expense of explanatory insights.
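For contrast, a minimal sketch of the predictive culture: a gradient-boosted model judged purely on out-of-sample discrimination, with synthetic data standing in for a large patient-level feature matrix.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a large patient-level feature matrix.
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           random_state=0)

# A flexible boosted-tree model, judged solely on held-out predictive
# performance (here, AUC) rather than on parameter interpretability.
model = HistGradientBoostingClassifier(random_state=0)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {auc.mean():.3f}")
```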

This dichotomy, of course, is an artificial one, and many roles straddle these two cultures: for example, many data scientists have a background in biostatistics, and many epidemiologists and biostatisticians understand data engineering and machine learning workflows on large data sets. Combining these analytical cultures into one multidisciplinary team is critical to make the best use of large RWE data sets. In our experience, teams achieve the biggest impact when a biostatistician or epidemiologist is responsible for the statistical veracity of individual causal findings and an ML specialist for predictive accuracy and generalization. Both kinds of specialists must also work closely with clinical experts and data engineers to understand issues involving the quality of data or selection mechanisms. These issues include systematically underreported lab data or other sources of bias.

Practitioners from the two cultures sometimes have conflicting priorities, which RWE leaders must resolve. The best results are achieved by reasoning back from impact, not forward from methods. For a given use case, that means identifying what will generate the impact—for instance, supporting regulatory approval, improving the likelihood of publication, maximizing predictive performance, or optimizing interpretability. Then the team must decide which approach or combination of approaches best suits that goal.

In this context, it is important to distinguish between hypothesis generation and hypothesis validation. Generation focuses on identifying correlations or patterns that represent new avenues of research and can subsequently be confirmed by follow-up clinical trials or other means. Validation uses more conservative methods, designed to avoid false discoveries, to confirm new insights. The use of RWE in studies to generate hypotheses can greatly expand the research horizons of pharma companies by including larger and more richly described cohorts. RWE or other observational data can also be used to validate hypotheses, but under stricter conditions and sometimes in combination with RCT data. Both generation and validation can use a combination of AI/ML and biostatistics methods, but generation puts more emphasis on AI/ML, and validation on biostatistics.

One pharma company built a tool that used insurance claims data to predict risk as accurately as possible at the level of individual patients. Clarity over this shared goal allowed the team to choose a suitable approach, work in an agile way, and avoid arguments about methodology. For example, when a biostatistician expressed a preference for a more explainable but lower-performance model, the team resolved the debate simply by referring to the jointly aligned goal.

Focus teams on scenarios in which AA adds value

Starting with the cases most likely to demonstrate the value of the analytical approach helps to ensure senior management’s support, which is critical to success in platform and process transformations. By focusing on the right cases, pharma companies can deepen their understanding of specific outcomes for patients, help physicians make better decisions, expand a drug’s use across approved indications, and advance scientific knowledge more broadly.

AA methods can help generate evidence in many areas of the therapeutic value chain (Exhibit 1), such as combining RCTs and RWD to optimize the design of trials in R&D. Leading pharma companies already see an impact in four significant areas: head-to-head drug comparisons, benefit and risk assessments for pharmacovigilance, drivers of treatment decisions, and support systems for clinical decisions.

[Exhibit 1]

Head-to-head drug comparisons

In these studies, companies analyze evidence to establish which patients respond best to a given therapy or which patient subgroups benefit more from a therapy than from its comparators. Head-to-head comparisons can yield valuable insights: for instance, older patients with a specific comorbidity may stand to benefit most from the treatment of interest. Such insights can help healthcare stakeholders make better decisions, inform physicians about treatments likely to deliver the best outcomes, and feed into the design of future clinical trials. Evidence from comparative analyses can also be submitted to regulators to support a treatment’s label expansion, without the need for new clinical trials.

Advances in explanatory modeling allow companies to apply machine-learning techniques within an appropriate statistical framework for inferring causal effects from RWD. By combining predictive approaches with causal inference in head-to-head observational studies, companies can generate evidence on a wider range of patients than they could with classical methods and explore a fuller set of subpopulations that may experience better outcomes. The use of ML techniques can also improve confounder control and yield more accurate estimations of the effectiveness of treatments than classical methods can.
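The article does not prescribe a particular method; one common way to combine flexible ML prediction with causal contrasts is a meta-learner. The sketch below shows a simple T-learner (one outcome model per treatment arm, predictions contrasted per patient) on synthetic data; the subgroup split is purely illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingRegressor

rng = np.random.default_rng(0)
X, _ = make_classification(n_samples=4000, n_features=20, random_state=0)
treated = rng.integers(0, 2, len(X)).astype(bool)
# Synthetic outcome whose treatment effect varies with the first feature.
outcome = 0.5 * X[:, 0] * treated + X[:, 1] + rng.normal(0, 1, len(X))

# T-learner: fit one flexible outcome model per treatment arm...
m1 = HistGradientBoostingRegressor(random_state=0).fit(X[treated], outcome[treated])
m0 = HistGradientBoostingRegressor(random_state=0).fit(X[~treated], outcome[~treated])

# ...then contrast per-patient predictions to estimate individual effects,
# which can be summarized for any subgroup of interest.
cate = m1.predict(X) - m0.predict(X)
subgroup = X[:, 0] > 1.0  # purely illustrative subgroup definition
print(f"Estimated effect: overall {cate.mean():.2f}, subgroup {cate[subgroup].mean():.2f}")
```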

With advanced analytics, companies can also scale head-to-head comparisons automatically and repeat them across many clinical endpoints, therapies, and patient groups. These analyses can then be presented to the wider business via interactive dashboards or other digital assets not only to provide robust insights into outcomes, treatment differentiation, and unmet needs but also to help teams generate evidence for further validation at scale (Exhibit 2).

[Exhibit 2]

Benefit and risk assessments for pharmacovigilance

The use of machine-learning algorithms enables companies to screen real-world data continuously for a broad array of potential adverse events and to flag new safety risks (or signals) for a treatment. Once risks are detected, researchers can assess and weigh them against the benefits. Real-world data were used in this way to detect blood clots in a very small percentage of patients receiving the Oxford/AstraZeneca COVID-19 vaccine and in the subsequent benefit/risk assessment of the vaccine by the safety committee of the European Medicines Agency. Large real-time sources of patient data enable researchers to capture otherwise unattainable signals for rare conditions and to gain novel insights into the risks and benefits of treatments, thus helping to improve pharmacovigilance processes.
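The screening statistic is not specified here; a classical baseline that ML-based screens build on is disproportionality analysis over a spontaneous-reporting database, such as the proportional reporting ratio (PRR). A minimal sketch with illustrative counts:

```python
def prr(a, b, c, d):
    """Proportional reporting ratio from a 2x2 table of report counts:
    a: reports with the drug and the event of interest
    b: reports with the drug and any other event
    c: reports with other drugs and the event
    d: reports with other drugs and any other event
    """
    return (a / (a + b)) / (c / (c + d))

# Illustrative counts from a spontaneous-reporting database.
a, b, c, d = 40, 960, 200, 48800
value = prr(a, b, c, d)
# A common screening heuristic flags PRR > 2 (given at least 3 cases).
flagged = value > 2 and a >= 3
print(f"PRR = {value:.1f}, flag for review: {flagged}")
```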

Machine learning also enables companies to screen continuously for a broad array of potential adverse events and to integrate large unstructured data sets that would otherwise be hard to process automatically. For example, algorithms can not only annotate reports of clinical cases with information (such as symptoms, disease status, and severity) that specialists can interpret but also prioritize these reports by relevance. This makes the review of medical cases for potential signals more efficient and detection more rigorous. Finally, as with head-to-head drug comparisons, ML algorithms can generate evidence of causality that can improve the assessment of known safety risks across a greater range of patient profiles.

To create an industrialized benefit/risk platform, algorithms can be embedded into robust data-engineering workflows, scaled up across a broad range of data sets, and processed on regular automated schedules (Exhibit 3). Such platforms give pharmacovigilance teams ready access to automated, standardized, and robust insights into a vast set of potential signals across the asset portfolio.

[Exhibit 3]

Drivers of treatment decisions

An analysis of RWD can shed light on what motivates patients to switch therapies, adjust the dose, or discontinue treatment and what makes physicians prescribe different therapies or doses. Patients receiving a certain therapy, for example, were found to be more likely to discontinue treatment if they had visited a specialist physician in the three months after starting it. Such valuable insights can be further validated by subjecting potential selection mechanisms to sensitivity analyses. The use of techniques such as inverse-probability-of-censoring weighting helps companies to understand whether patients who drop out of the observation window differ systematically from those who don’t. Physicians can then use these insights to help patients stay on a therapy and obtain the maximum benefit from it.
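To make the technique concrete, here is a minimal sketch of inverse-probability-of-censoring weighting on a hypothetical patient table (all column names are illustrative): model each patient's probability of remaining under observation, then weight the uncensored patients by its inverse so that they also stand in for similar patients who dropped out.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "age": rng.normal(60, 10, 2000),
    "saw_specialist": rng.integers(0, 2, 2000),  # illustrative covariate
    "censored": rng.integers(0, 2, 2000),        # dropped out of observation
    "discontinued": rng.integers(0, 2, 2000),    # outcome of interest
})
covariates = ["age", "saw_specialist"]

# Model the probability of remaining uncensored given baseline covariates.
model = LogisticRegression(max_iter=1000).fit(df[covariates], 1 - df["censored"])
p_observed = model.predict_proba(df[covariates])[:, 1]

# Weight each uncensored patient by 1 / P(remaining observed), so the
# analysis population again resembles the full starting cohort.
observed = df["censored"] == 0
w = 1 / p_observed[observed]
rate = np.average(df.loc[observed, "discontinued"], weights=w)
print(f"IPCW-adjusted discontinuation rate: {rate:.3f}")
```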

Predictive modeling can yield deep insights into the complex drivers of treatment decisions and can do so at a larger scale than classical RWE analytics: for instance, the combinations of patient characteristics that predict a treatment decision or the (possibly nonlinear) way an individual patient characteristic affects the likelihood of a decision. With classical techniques, the researcher must often manually encode such relationships into the algorithm. Explainable artificial intelligence (XAI) techniques, such as SHAP, or causal-inference methods, such as Bayesian networks, can provide a detailed view of the drivers of treatment decisions. They can also expose the way differences between groups affect a patient characteristic: for example, advanced age may reduce the risk of treatment switching among patients seen by generalist physicians but increase it for those seen by specialists.
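As an illustration, a minimal SHAP sketch, assuming the open-source shap package and a boosted-tree model trained on synthetic stand-in data:

```python
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for patient features predicting a treatment switch.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# SHAP attributes each individual prediction to the patient characteristics
# behind it, exposing possibly nonlinear drivers and interactions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per patient, per feature
shap.summary_plot(shap_values, X)       # global view of the drivers (needs matplotlib)
```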

Since companies can use the same predictive-modeling approach for many aspects of the treatment journey, models can be scaled up to scan for drivers across decision points and across the entire patient population for a particular indication. This approach helps generate detailed evidence on the potential drivers of decisions for various treatments, not just a company’s own therapies.

Support systems for clinical decisions

Decision support systems, which are increasingly common in clinical settings, seek to improve medical decisions by providing information on patients and relevant clinical data. These systems are often developed with large real-world data sets and are subject to close regulatory scrutiny. Typically, they rely on algorithms that can, for example, predict (as a function of thousands of patient-level characteristics) the risk that a patient will suffer heart failure during the week after visiting a physician. Such systems help to raise the standard of care, reduce its cost, and improve safety for patients.

Researchers have shown that the accuracy of the predictions clinical decision support systems generate can be improved dramatically by replacing classical statistical models with machine-learning techniques such as bespoke neural-network architectures and tree-based methods. In addition, XAI techniques can be employed (much as they are for the drivers of treatment decisions) to help physicians interpret and use these tools effectively. So can algorithms designed to be just as interpretable as statistical models, but without sacrificing predictive performance. These algorithms include generalized additive models (GAMs) and their machine-learning counterpart (GA2M), which are gaining a foothold in healthcare.
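GA2M is available in open-source form, for example as the explainable boosting machine in the interpret package; a minimal sketch on synthetic data:

```python
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for patient-level risk features.
X, y = make_classification(n_samples=3000, n_features=12, random_state=0)

# The explainable boosting machine (a GA2M) fits one shape function per
# feature plus selected pairwise interactions, so each risk score can be
# read off as a sum of interpretable terms.
ebm = ExplainableBoostingClassifier(random_state=0).fit(X, y)
risk = ebm.predict_proba(X[:1])[0, 1]
local_terms = ebm.explain_local(X[:1])  # per-feature terms behind this score
print(f"Predicted risk for the first patient: {risk:.2f}")
```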

The predictions and explanations machine-learning algorithms generate can be presented to physicians in an application that supports their clinical decision making (Exhibit 4). To ensure ease of use, companies developing these applications involve user experience designers from an early stage to work closely and iteratively with physicians.

[Exhibit 4]

Build a technical platform to scale up safely

To progress from experimenting with AA to deploying scaled-up use cases across indications, therapies, and locations, RWE leaders must build a technical platform with five key components. Given the right data environment, tooling, processes, teams, and ethical principles, companies can develop analytical use cases at pace and extend them across their portfolios.

  1. An integrated data environment replaces ad hoc data sets with an industrialized, connected environment that gives users access to a comprehensive collection of data sets through a central data lake. In such an environment, data catalogues are implemented in line with business priorities and a clear map of RWE data sources. Data sets are ingested and then enriched by repeatable data-engineering workflows: for example, regular workflows can be created to build feature representations of patients based on their medical histories, and multiple analytical applications can then use these representations. Linked data sets become proprietary knowledge that anyone in the organization can use. By creating industrialized processes to manage and track frameworks for data access, metadata, data lineage, and compliance, companies help ensure that workflows are robust and repeatable and that all RWE analyses maintain the integrity and transparency of data.
  2. Modern tooling replaces disparate and fragmented computing resources with enterprise-grade distributed scientific and computing tools. To avoid interference with everyday business operations, analytics are typically developed in a lab-like environment separate from other systems. This environment must be flexible and scalable to handle a number of different analytical approaches: for instance, the use of notebook workflows. Data engineers must be able to access scalable resources as and when needed to support data-processing workflows. Similarly, data scientists must have the flexibility to switch between common programming languages (such as Python and R) to use the modeling-software packages they need. Depending on their analytical approach, they may also need access to hardware designed to accelerate deep-learning workflows.
  3. Machine-learning operations (MLOps) replace manual processes for deploying models with automated processes in a factory-style environment where analytics models can be run continuously and at scale. (This helps to reduce development times and to make models more stable.) Automated integration and deployment give users reliable access to analytics solutions. Meanwhile, to manage the performance of models, control risks, and prevent unforeseen detrimental impacts on operations, monitoring processes should regularly check the predictions that algorithms generate. A central platform team with skills in MLOps, software operations (DevOps), and platform design usually builds the factory environment and develops the processes.
  4. Analytics product teams, which replace project teams, focus on mutually reinforcing use cases. They build reusable modular software (or data products) and assemble and configure these elements in line with business needs as they scale up a use case to a new therapeutic area or business unit: for example, a general data model and algorithm built with rich US data sets could be ported to another region with more limited data by selecting only the relevant components. This approach enables companies to scale up the generation of evidence across indications, therapies, and use cases and allows researchers to run hundreds of analyses across multiple patient outcomes and subpopulations.

    Such a team is led by a product owner, who defines the product’s success criteria and is accountable for realizing short- and midterm objectives. Its other members have profiles resembling those of the RWE teams described earlier. Each product team develops and advances the methodology for its own use case while working with other teams to capture synergies between data products.

  5. Ethical principles for data science and machine learning ensure that insights are extracted from RWD and other complex data sources safely. Care should be taken to avoid any risk of causing disproportionate harm to underprivileged or historically underrepresented groups. Risk management in AI is a complex and rapidly evolving field; regulators, professional bodies, and thought leaders have developed increasingly sophisticated positions on it.

    AI risk management involves more than just the standard data ethics commitments, such as respect for privacy and the secure handling of sensitive data. The issues here include the explainability of models (so that nonspecialists can understand them) and fairness (so that they have comparable error rates or other criteria across, for example, gender or race). An ethically aware mindset recognizes that bias can be present in RWD because of historical disparities. It takes steps to detect and mitigate that bias by using clinical input and care when constructing cohorts and endpoints in RWE, as well as AA techniques that automatically profile the fairness of a predictive algorithm. These measures should be complemented by the use of privacy-respecting technologies, modern data governance with clear accountability and ownership, and explainable methods (rather than black-box algorithms).
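As one concrete example of the fairness profiling mentioned above, the sketch below compares false-negative rates across a hypothetical sensitive attribute on a model's evaluation set; a production profiler would cover more metrics and groups.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Hypothetical evaluation set: model predictions, true outcomes, and a
# sensitive attribute used only for auditing, never as a model input.
eval_df = pd.DataFrame({
    "y_true": rng.integers(0, 2, 5000),
    "y_pred": rng.integers(0, 2, 5000),
    "group": rng.choice(["A", "B"], 5000),
})

# Compare false-negative rates across groups: a large gap means one group
# systematically misses out on whatever intervention the model triggers.
for name, sub in eval_df.groupby("group"):
    positives = sub[sub["y_true"] == 1]
    fnr = (positives["y_pred"] == 0).mean()
    print(f"Group {name}: false-negative rate = {fnr:.3f}")
```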


When successful companies apply AA to real-world data, they create interdisciplinary teams, focus on use cases that demonstrate value, and build platforms to provide an integrated data environment. Scaled up in this way, advanced RWE analytics helps organizations to make decisions more objective and to shift the focus from products to patients. In this way, it supports the broader goal of delivering the right drug to the right patient at the right time.
