Embracing AI 2.0 and modern machine learning as foundational tools for deepening HCP engagement
Driverless cars and doctorless diagnosticians must work flawlessly every time, and their users must be confident that such artificial intelligence (AI)-driven innovations will work as expected. Adoption of these technologies, and hence their tremendous benefits, hinges on trust; yet trust is only built by using the technology and seeing it work. It is a catch-22.
As AI continues to show great promise for commercial life sciences, trusting its suggestions is paramount to widespread adoption and growth. In fact, as AI evolves into the more advanced, contextual version that is emerging today, its predictions are becoming increasingly pivotal to treatment decisions. Sales reps, brand teams and medical science liaisons (MSLs) are better equipped than ever to deliver the right information to healthcare providers (HCPs) at the right time: for instance, highlighting a new therapy with fewer side effects for a recently diagnosed patient. The problem is that most users have little visibility into how AI-powered systems make the decisions they do, and this lack of transparency inevitably casts doubt. Many of the algorithms used for machine learning (ML), particularly the popular deep learning approaches in use today, do not transparently convey how and why a decision has been made.
The lack of rationale hampers our ability to let go of misgivings and accept the accuracy of machines whose mechanics we cannot easily see or comprehend. As such, people want computer systems to produce transparent explanations for their decisions, a discipline known as explainable AI (xAI). Fortunately, the most advanced AI solutions in commercial life sciences now incorporate xAI with a blend of other analytics technologies and human insight to improve decision-making by building real-life context and plain-language explanations into recommendations.
As trust between man and machine builds, AI 2.0 will become foundational to deepening HCP engagement in today’s digitally driven commercial model. Here is a look at how next-generation AI builds trust between sophisticated systems and the users they support.
Complexity demands more humanity in modern AI
The life sciences industry is incredibly complex. There are more channels, data, regulations, stakeholders and pricing pressures than in nearly any other industry, making trust both hard-earned and worth its weight in gold.
Companies across many industries have become increasingly comfortable leveraging the insights provided by ML algorithms to aid decision-making. For example, GE Power is using advanced analytics and ML to enable predictive maintenance, operational efficiencies and business optimization as it moves toward its vision of a digital power plant. American Express, which processes $1 trillion in transactions and has 110 million credit cards in operation, relies on ML to help detect fraud in near-real time, saving millions of dollars in losses. AI is even helping the farming industry, as John Deere uses its ML-driven FarmSight system to guide decisions on things like pest control or which crop to plant when.
While exciting, these AI applications do not carry such serious consequences if the AI gets it wrong. For the life sciences industry, an errant recommendation could inadvertently prevent potentially life-changing information from reaching doctors.
Beyond its weighty potential impact, AI in the life sciences industry is also particularly complex. Consider a comparison to online retail. Amazon offers personalized recommendations based on browsing and shopping history. Most of the data reside within Amazon and support very direct connections: you shopped for this, so you may also like that. In the life sciences, however, there are many more variables to consider, such as the variety and location of relevant data sets, varying HCP interactions across therapeutic areas, data privacy requirements and multi-region brand strategies.
An environment with so many twists and turns, made even more challenging by the COVID-19 pandemic, requires an evolved form of AI if users are going to embrace its full potential and overcome go-to-market complexities. AI 2.0 must feel more human to garner confidence. By drawing on specific feedback from field and brand teams as well as MSLs, AI 2.0 incorporates context from real-world situations to produce recommendations that feel like suggestions from an experienced peer.
“We now see the power of AI, but my early experiences with AI at previous companies were not as successful. Users were hesitant to trust the recommendations as they seemed synthetic,” explained Joel VanderMeulen, senior director of commercial strategy and operations at EMD Serono, a Merck KGaA company. “AI 2.0 combines different analytics technologies like geospatial targeting that knows where a rep will be and provides pertinent suggestions at the right time, so users don’t feel like they are interacting with a robot. It brings the machine to life for a more natural experience they can trust.”
Context ensures hard-earned trust is not easily lost
AI 2.0 leverages the strengths of both humans, who are good at making decisions in complicated situations, and machines, which excel at data processing but can struggle to make basic judgments, to create more meaningful relationships with HCPs. Early commercial AI offerings were hampered by the one-note nature of their approach. Most solutions fell neatly into one of two buckets: ML or expert systems (an if-then, rules-based approach). Neither technology, by itself, does a great job of replicating the way humans really think. Without the necessary context, ML technologies can generate conclusions that are not practical in the real world, causing users to doubt their validity. At the same time, it is impossible to catalog every scenario a user might encounter with rules alone, which results in less accurate conclusions.
Modern AI combines advanced analytics technologies (business logic, ML, explainable AI and optimization) with human intelligence to transform nascent forms of AI into a trustworthy assistant. Its blend of business rules and ML creates real-world guardrails that prevent recommendations from contradicting common sense.
Further, third-party data sets and the insights captured in a company’s customer relationship management application, marketing automation platform and other tools provide a foundation for understanding how HCPs typically interact with a brand.
Ultimately, it is the overlay of human intervention that ensures recommendations are rational and, therefore, more likely to be trusted by end users.
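To make the idea of guardrails concrete, the sketch below shows, in Python, how a layer of explicit business rules might sit between an ML model's raw engagement scores and the suggestions a rep actually sees. The field names, rules and thresholds here are hypothetical illustrations under simplified assumptions, not any particular vendor's logic.

```python
# A minimal sketch of rule-based guardrails layered on top of ML suggestions.
# Field names, rules and thresholds are hypothetical illustrations only.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Suggestion:
    hcp_id: str
    action: str          # e.g. "send_email", "schedule_visit"
    ml_score: float      # model-estimated likelihood of engagement
    last_contact: date   # most recent interaction with this HCP

def business_rules_ok(s: Suggestion, today: date) -> bool:
    """Common-sense guardrails expressed as explicit if-then rules."""
    # Rule 1: never suggest another touch within 7 days of the last one.
    if today - s.last_contact < timedelta(days=7):
        return False
    # Rule 2: ignore weak model signals; a rep's time is scarce.
    if s.ml_score < 0.3:
        return False
    return True

def next_best_actions(suggestions: list[Suggestion], today: date) -> list[Suggestion]:
    """Keep only rule-compliant suggestions, ranked by model confidence."""
    allowed = [s for s in suggestions if business_rules_ok(s, today)]
    return sorted(allowed, key=lambda s: s.ml_score, reverse=True)

if __name__ == "__main__":
    today = date(2021, 3, 1)
    candidates = [
        Suggestion("HCP-001", "send_email", 0.82, date(2021, 2, 1)),
        Suggestion("HCP-002", "schedule_visit", 0.91, date(2021, 2, 27)),  # too recent
        Suggestion("HCP-003", "send_email", 0.15, date(2021, 1, 5)),       # weak signal
    ]
    for s in next_best_actions(candidates, today):
        print(f"{s.hcp_id}: {s.action} (score {s.ml_score:.2f})")
```

In practice, the rule layer is where field feedback and brand strategy get encoded, which is also where the human overlay described above enters the picture.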
If you explain it, they will use it
AI yields optimal results when every interaction and data point is analyzed together, which is possible with the right blend of analytics technologies. However, explainable AI (xAI) may be the most important analytics technology of all when it comes to building trust with users.
Explainable AI is a key part of AI 2.0 that is helping life sciences companies drive greater adoption among MSLs, brand teams and sales reps. It is an emerging field of ML that addresses how decisions are made by examining the steps involved in decision-making, and it hinges on clearly articulating how ML models generate their outputs. For example, xAI answers questions like: Why did the AI system make a specific suggestion? Why didn't it suggest something else instead? It explains how conclusions were reached to improve understanding.
In the life sciences industry, xAI might recommend that a sales rep send an email instead of making a virtual visit, along with the reason: this HCP is most responsive via email. In another example, a suggestion that an MSL schedule a meeting with an HCP today might come with an explanation that the urgency stems from the doctor not having been engaged in 120 days.
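As a toy illustration of the idea, the following Python sketch attaches a plain-language reason to a suggestion by reporting whichever feature contributes most to a simple linear engagement score. The features, weights and wording are invented for illustration; production xAI techniques are considerably more sophisticated.

```python
# A minimal, illustrative sketch of attaching an explanation to a suggestion.
# Features, weights and templates are hypothetical, not a real model.

# Per-feature weights from a simple, already-fitted linear engagement score.
WEIGHTS = {
    "days_since_last_engagement": 0.012,    # longer gaps raise urgency
    "historical_email_open_rate": 1.8,      # strong email responders score higher
    "recent_virtual_visit_declines": -0.5,  # declined visits push away from visits
}

REASON_TEMPLATES = {
    "days_since_last_engagement": "this HCP has not been engaged in {value:.0f} days",
    "historical_email_open_rate": "this HCP opens {value:.0%} of emails",
    "recent_virtual_visit_declines": "this HCP recently declined {value:.0f} virtual visits",
}

def explain(hcp_features: dict) -> str:
    """Return the single largest contribution to the score as a plain-language reason."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in hcp_features.items()
    }
    top_feature = max(contributions, key=lambda name: abs(contributions[name]))
    return REASON_TEMPLATES[top_feature].format(value=hcp_features[top_feature])

hcp = {
    "days_since_last_engagement": 120,
    "historical_email_open_rate": 0.35,
    "recent_virtual_visit_declines": 2,
}
print("Suggested next action: engage this HCP now, because", explain(hcp))
```

The point is not the arithmetic itself but the translation step: every recommendation arrives with a reason a rep can read and challenge.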
Thanks to xAI, ML is no longer a mysterious black box. It provides transparent explanations for every recommendation, fostering enough confidence that end users trust AI to make good decisions. That confidence is crucial for continued AI deployment in life sciences and for building on today's successes toward a more intelligent future.
“Field reps and MSLs who have been interacting with their customers for years tenaciously believe in their own intuition,” said VanderMeulen. “It can be hard to convince them to do something that goes against the grain or change their minds if, say, an AI engine suggests that they call on a customer they don’t think is a good target. It’s a matter of trust, but once reps see the value of AI, they believe.”
AI that evolves evokes ongoing user confidence
Development cycles may be long and predictable in the life sciences, but commercial market factors are not: they are dynamic and unpredictable, shaped by competitive threats, regulatory changes and ever-changing HCP preferences. The pandemic has piled on more challenges, as digital outreach from all customer-facing groups, including sales reps, nurse educators, key account managers and MSLs, has dramatically increased, demanding that companies find unique ways to cut through the noise. In fact, 58% of HCPs surveyed in August 2020 said they have been spammed by a pharmaceutical company, receiving high volumes of digital content that misses the mark.1
AI 2.0 can help, provided commercial teams learn to trust that today's AI systems are sophisticated enough to navigate dynamic markets. Today, AI 2.0 is no longer making educated guesses because it is constantly learning. As campaign elements succeed or fail, next-generation AI incorporates those learnings to continuously optimize execution. AI that can apply what it learns from one environment to the next is the difference between AI that "thinks" like a human and AI that performs rote calculations like a machine.
Additionally, next-generation AI makes multi-faceted assessments at the individual level, analyzing every HCP characteristic—from specialty and patient coverage data to when they graduated medical school—against a large dataset of HCPs to determine communication preferences and likelihood to engage on a one-to-one level. AI 2.0 not only knows when an HCP has historically accepted visits or opened emails, but also predicts the optimal time to engage when a new channel is added to the mix.
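To picture this kind of individual-level assessment, the hypothetical sketch below fits a simple model on synthetic interaction history and then scores every candidate channel-and-hour combination for a single HCP, surfacing the slot with the highest predicted engagement. The features used here (years since medical school graduation, channel, hour of day) are stand-ins for the much richer data a real system would draw on.

```python
# A hedged sketch of per-HCP channel/time scoring: fit a simple model on past
# interactions, then score candidate (channel, hour) slots for one HCP.
# Data is synthetic and the features are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Synthetic history: [years since graduation, channel (0=email, 1=visit), hour of day]
n = 2000
X = np.column_stack([
    rng.integers(1, 40, n),
    rng.integers(0, 2, n),
    rng.integers(8, 18, n),
])
# Toy ground truth: earlier-career HCPs engage more with email; late morning works best.
p = 1 / (1 + np.exp(-(1.5 * (X[:, 1] == 0) * (X[:, 0] < 15) + 0.8 * (X[:, 2] == 10) - 1.0)))
y = rng.random(n) < p

model = GradientBoostingClassifier().fit(X, y)

# Score every candidate slot for one (hypothetical) HCP, 8 years post-graduation.
candidates = [(channel, hour) for channel in (0, 1) for hour in range(8, 18)]
scores = model.predict_proba([[8, c, h] for c, h in candidates])[:, 1]
best_channel, best_hour = candidates[int(np.argmax(scores))]
print(f"Best slot: {'email' if best_channel == 0 else 'visit'} at {best_hour}:00 "
      f"(predicted engagement {scores.max():.0%})")
```

The same scoring idea extends naturally to a channel with little history, where the model's predictions for similar HCPs stand in until real engagement data accumulates.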
“AI innovation that incorporates human context is incredibly valuable,” added VanderMeulen. “Combining primary data, such as unstructured call notes from field teams, with secondary data, such as script writing and other data from market research companies, is the next level. It’s extremely exciting—and, equally as important, it helps to foster trust in the accuracy of its recommendations.”
AI will transform the life sciences commercial model
Physicians' increasing expectations for personalization, coupled with decreasing access, have driven a slow drip of changes to the commercial model in recent years, but the ultimate catalyst has been COVID-19. As one colleague put it, the future just showed up. Even if it is not yet clear what HCP engagement in a post-pandemic world will look like, the need for commercial transformation could not be more apparent. If AI 2.0 paves the path to transformation, trust in AI ensures we walk it.
Fortunately, the groundwork is well underway. After years of steady, incremental progress, most life sciences commercial teams now have a strong data and analytics infrastructure in place and are already turning on the right digital and non-personal communication channels at scale. Users, such as sales reps at EMD Serono, are seeing the value of AI and quickly building confidence in its recommendations.
As the industry navigates today's increasingly complex and high-pressure environment, a contextual AI approach ensures that the information HCPs receive is tuned to their specific needs and preferences. AI 2.0's blend of analytics technologies and human insight digitizes, analyzes and incorporates every piece of data and feedback from the field, from prescribing habits and patient demographics to research interests and event attendance, to personalize the customer experience. Now, with xAI, users can finally trust its output, paving the way for AI's adoption across the organization and the industry.
About the author
Derek Choy is president of Aktana.
Reference
1. Accenture. "Is Covid-19 Altering How Pharma Engages with Reps?" Research report, published online August 4, 2020.