
Interview: AI by Itself Isn’t Going to Magically Deliver a Drug Today


Sakthi Prasad T, Content Director   |   7 min read

A recent survey by Zifo Technologies on Data Readiness for AI found that only about one-third of scientists and informaticians feel confident leveraging scientific data for AI initiatives, highlighting foundational data challenges across biopharma that, if not addressed soon, will make AI adoption that much harder.


AI gained prominence and popularity after the introduction of OpenAI’s ChatGPT: ordinary consumers got to experience GenAI technology first-hand, creating unprecedented buzz and excitement. At the same time, one should remember that AI/ML has been around for a long time. In fact, the finance industry is far ahead of all other industries when it comes to the gainful deployment of the latest technologies. In the U.S. stock market, about 70% of trading volume is initiated through algorithmic trading, a forerunner of AI trading. According to a Coalition Greenwich study, about a quarter of buy-side equity traders surveyed plan to incorporate internal AI technologies into their trade execution workflow in the coming year.

In this fast-evolving scenario, to gain a deep understanding of the current state of new technology adoption in the biopharma sector, we spoke to Sujeegar Jeevanandam, Zifo's Principal Consultant, Data and AI, about AI's real-world impact, moving beyond the glamorous headlines of drug discovery.

By its very nature, the biopharma industry avoids risk. It must, because of strict regulations, complex manufacturing, and data privacy. You don't see the kind of rapid tech adoption that Wall Street saw with algorithmic trading, which now uses such technologies to trade trillions.

So where does that leave biopharma with AI? Based on your interaction with various industry stakeholders, are they genuinely enthusiastic and making it a priority, or is their cautious nature creating a more lukewarm response? What's really driving their approach?

Firstly, I don’t subscribe to the idea that biopharma is "risk averse." In my view, the industry thrives on risk. The sheer number of clinical trials that launch each year -- and the high failure rate they accept -- is proof of this. You never start a trial with a guarantee of success; it’s inherently a betting game.

Yeah, I take your point; maybe I should have qualified it as risk-averse when it comes to embracing new technologies.

Correct. The industry tries to be cautious and control the journey it goes through to take a drug to market. That's true whether it's the talent they acquire, the partners they choose, the technologies they adopt, or, most importantly, the processes they follow. The idea is, "I am already taking such a high risk, so I'll go with a strategy that can mitigate some of those risks."

So, from a pure technology point of view, the industry can be seen as a laggard compared to other industries, like finance, as you mentioned. But when you think from a larger perspective, if you take a risk in one area, you try to mitigate it in another. Right?

Now let me take up the second part of your question: the biggest factor I feel is driving the adoption of AI is FOMO -- everybody feels they will miss out on this wave if they don't get involved now. Compared to how the industry adopted technologies like electronic lab notebooks (ELN), LIMS, cloud, and other technical advancements, I feel the pharma industry is taking a very driven approach to embracing AI. The ultimate question is going to be: how well will you adopt AI?

Is it hype? I mean, if I ask you, is AI just hype, or are there genuine use cases for it?

It's not hype. It has the potential to significantly reduce the cost of developing and bringing a new drug to market. However, AI by itself isn't going to magically deliver a drug today. Everyone wants that -- everyone wants to ask a question and have AI say, "Here's a drug; it's going to cure." That's not what I believe AI is capable of right now.

But the opportunities for AI to create an impact in a localized manner throughout the drug's 10-year lifecycle in the industry are tremendous. If you can even create a 5% impact at various stages of the pipeline, it's going to translate into millions, if not billions, in savings. It will also reduce the cycle time to bring a new drug to market, and it has the potential to significantly reduce challenges and adverse events that can happen once a drug goes to market.

Many companies already have IT systems in place, and AI essentially sits on top of them. You can use it to harvest data from various instruments and systems to reveal patterns or insights that scientists can study. So, AI's primary use is to extract these insights from existing data.

This brings us to the relationship between basic IT plumbing and AI. Think of it like algebra versus calculus in building a house. Algebra represents the fundamental building blocks -- the bricks, cement, and tiles. You can't build a house without these basic materials. Calculus, on the other hand, is like the design phase: deciding where rooms go, how to arrange furniture, and so on.

Many people try to jump straight to calculus without mastering algebra first, and they find it difficult. That's because calculus applies the foundational building blocks that algebra provides to real-world situations.

Similarly, you can't do AI without basic IT plumbing. The question is, where do we draw the line? In what situations will basic IT plumbing solve the problem, and when do you need AI? For some situations, just like algebra is enough for certain problems, basic IT plumbing will suffice; you don't need to move to the "calculus" level of AI.

I'm going to use a different example, if that's fine: the analogy of a laptop and the programs we run on it. To me, the LLM is the CPU. Billions of people use laptops, and most of us don’t have to understand how a CPU works. People know, I hope, that a CPU is critical for a laptop, but they have no knowledge of how it functions. They still use the laptop very effectively, right?

When we talk about Gen AI in general, all the tech leaders working in this space today -- OpenAI, Perplexity, and others -- are building a laptop for the industry to use, on top of the LLM, which is the CPU.

So, if I'm a team leader who wants to leverage AI, I don't need my team to understand LLMs. I need my team to be capable of working with a laptop, right? What this means is that the LLM is yet another technology available for knowledge workers to perform their business function. So, in a way, yes, IT systems and data are the “Algebra” while the AI application is the “Calculus”.

I started out programming in BASIC on mainframes, and when new technologies like Python and C# evolved, I embraced them. I learned them. The tools changed, but the fundamental problems didn’t -- I was still solving the same scientific problems for my customers. This means AI will eventually fold into this term that you called IT plumbing.

Okay, in that case, scientists don’t need to be experts in AI or LLMs, right?

I don't think scientists have to go out of their way to become experts in AI, LLMs, or any technology to be able to use it. If I know how to use a laptop -- if I understand the purpose of the laptop, keyboard, mouse, and so on -- I can do my job effectively without having to know the specifics. And that's what the industry and society are moving towards: making it easy for consumption. If you make AI easy to consume, more people will consume it. This means a scientist doesn't have to worry about whether it's too tech-heavy for them to leverage. I don't think that's what customers, or any individual users, should worry about.

When we talk about AI in biopharma, most people discuss discovery, specifically the drug discovery phase. Of course, that's the most glamorous part, where you analyze data and find a cure for a disease -- that's the most often cited use case. However, I'm looking for some non-glamorous areas in biopharma where AI can be equally and potently deployed. Can you give two or three specific or general examples?

The most non-glamorous example I can think of is simply getting answers to questions. We document, capture, and record every aspect of research. Sometimes, we have a question and know the answer lies in a specific document or experiment record. Today, you might spend hours trying to find that record. You get tired and decide to just redo the entire experiment to generate a new copy of the data for your reporting. With AI, you could get those answers very quickly or at least be pointed to the exact location where the answer resides. I cannot overstate how much in savings we'll realize by being able to find a fact that we know already exists.
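The retrieval idea Sujeegar describes -- asking a question and being pointed to the record that already holds the answer -- can be sketched in a few lines. The toy Python example below ranks hypothetical experiment records by simple word overlap with a question; the record IDs and contents are invented for illustration, and a production system would use embeddings and an LLM rather than word overlap, but the principle of ranking existing records against a question is the same.

```python
import string

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into punctuation-stripped words."""
    return {w.strip(string.punctuation) for w in text.lower().split()} - {""}

def find_record(query: str, records: dict[str, str]) -> str:
    """Return the ID of the record sharing the most words with the query."""
    return max(records, key=lambda rid: len(tokens(query) & tokens(records[rid])))

# Hypothetical experiment records a scientist might have on file.
records = {
    "EXP-0041": "HPLC purity assay, ZF-12 batch 3, acetonitrile mobile phase",
    "EXP-0042": "Cell viability screen, compound ZF-12, IC50 measurement",
    "EXP-0043": "Stability study of buffer formulation at 4 C",
}

print(find_record("what was the IC50 of ZF-12?", records))  # EXP-0042
```

The point is not the scoring function but the workflow: instead of redoing an experiment, the scientist is routed to the record that already contains the answer.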

Beyond that, there are numerous opportunities to optimize operational activities, particularly with the help of Gen AI combined with digital twin capabilities. Routine activities a scientist or analyst performs in the lab can be automated through AI. A good example is: I'm a scientist running an experiment. I observe something that I need to document. Today, I can't stop the experiment to do the documentation; I just hope I'll remember and diligently capture that observation at the end. But with AI, I could ask ChatGPT or Copilot to make a note of that observation through a voice command. The impact of being able to capture these observations, which could become incredibly valuable when developing new models and finding answers, is tremendous.

Okay, so AI is not just at the large, glamorous end of discovery. It can be surgically deployed across the value chain, basically wherever there is a pain point in the entire R&D and manufacturing process. Are you saying that one can identify a particular pain point, be imaginative, and come up with a use case to solve it?

Absolutely. In fact, viewing the impact of AI only through the glamorous aspects will result in a lot of pain and disappointment. I do not believe that even if you put together all the knowledge from all the biotech and pharma companies in the world, we would have the information to accurately map human biology. This is why every clinical trial is a risk; it's a lottery where you hope all the other pieces that you have not thought of would click into place.

And it is going to take quite a while, in my opinion, to be able to fully model human biology, to be able to say that "here's the question, here's the answer, and the answer is the cure." I truly believe that is where the industry should be going. You have to leverage AI to completely model human biology. But we are not going to get that today. I don't even think we are going to get there in the next five years or so, maybe much longer after that.

Beyond speeding up the trial (time as a benefit) or reducing its cost (cost as a benefit) -- because everyone cites time and cost as the two distinct benefits for using AI -- what other benefits could you think of from AI?

The third dimension, I'd say, is knowledge. Today, your knowledge space is limited by the talent you have. This can result in focusing on a specific indication or pathway or sticking to traditional manufacturing processes to make a new drug, which falls under the CMC domain. The power of AI is that when used correctly, your knowledge space can grow manyfold. You could tap into potential areas that you are currently completely blind to.

For example, let's look at translational sciences, where we generate a huge volume of multi-modal data. Target identification and target validation are two core milestones at the beginning of a drug's life cycle. Any pharma or biotech company has specific indications or targets on which they have a huge knowledge base, and they continue to exploit that because of their in-depth understanding. Without AI, they might not even attempt to step out of this comfort zone to look at possibilities like the impact an approved drug could have on a totally novel target or an indication they aren't focusing on. Why would they? They have limited resources and talent. They operate in a space where they believe they have the best chances of developing a new cure or drug. With AI, however, you can explore new targets and new indications with relatively minimal effort and time.

This expansion of knowledge, this opportunity to play on a much larger ground, comes at a cost. You need good quality data to be able to expand your knowledge. To me, compared to the cost and time benefits, knowledge benefits demand larger data capital. As we all know, that's a space where a lot of improvements driven by FAIR are happening and still need to happen.

How about privacy concerns? In the latest Zifo survey, privacy was cited as the top reason for not adopting AI, given that these are all third-party tools.

The concerns around privacy and security of data are no different from those raised when the industry was trying to adopt the cloud. I believe the industry has learned a lot about how to protect IP and secure data, even when the data is stored or processed by a third party.

Naturally, everyone has doubts about how various players are securing and isolating one customer's data from another's, but I do not believe that is going to limit the adoption of AI by the pharma and biotech industry. They are doing it already; they are using the ChatGPT Enterprise version, for example, leveraging the cloud.

Many customers have their own implementation of these models on-premise or in their own private cloud, to be sure. However, we already see customers leveraging models hosted by third-party providers the same way they are leveraging cloud services hosted by AWS and Google Cloud.

So, what does success look like? I know these are still very early days, but let's say a pharma company gets on the AI bandwagon -- putting aside the fact that I know people will say, "Okay, we will discover that success" -- let's take that glamorous part out of the equation.

From a non-glamorous perspective, how can they enumerate success in the sense that they have created value? What kind of KPIs can they show for it -- because they're making investments that would require ROI. As you said, they will have to invest in data, data capture, and all of that is an investment.

What would a truly successful integration of AI look like, where the business value is shown, and that does not necessarily mean faster drug discovery? So, if you take that out of the equation, what would success look like?

As a life sciences consultant, I see AI becoming a norm in the research and development process, the way ELN and LIMS have become the norm. If anyone isn't using ELN and LIMS, they're an outlier; they're seen as a laggard.

When the industry treats AI adoption with the same lens, it will mean AI has become core to life sciences research. However, there are still miles to go because, believe it or not, even today there are multiple pharma and biotech companies that still run on paper. They have hundreds, if not thousands, of scientists documenting everything on paper, not even using ELN.

You mentioned ELN, how about LIMS?

I would say a lack of a proper LIMS means you're sending a lot of instructions through emails, while depending on handwritten notes and Excel sheets. This shows how slow the industry has been in the past when it comes to technology adoption.

ELNs first rolled out at the beginning of the millennium. Twenty-five years later, while some people still use paper, I observe significant adoption of AI in the industry and among the customers I have worked with. So, I do not believe the industry is going to take 25 years to adopt AI. That's why I do not believe the risk-averse nature of the industry is impacting or hindering the inclusion of AI into research and development.

So, you're saying, from what you have observed, the adoption of AI is faster, or rather more robust, than the adoption of, say, ELN or LIMS?

Yes, that’s correct. The rate of AI adoption is faster than ELN or LIMS adoption, in my opinion.

Everyone understands the importance of AI -- nobody's disputing it. But if you take any large biopharma company, leaving aside the startups, that's where the complexity lies, because they'll be working on different targets, different projects, different teams, different systems, geographies, languages, and whatnot.

So, who's responsible for coming up with an overall AI strategy and then also deciding when, where, and how to use AI? Do you see any such thing happening in the industry currently? Or is it all being done individually by the teams themselves, with no overarching, centrally driven mandate from top management saying, "Okay, use AI here and here"? Or is it more decentralized?

I'm going to give you a very boring answer: everyone is responsible for incorporating AI into their operations.

However, from what I have seen with my customers and across the industry, there is a top-down approach. Organizations are setting up specific functional units to drive and incorporate AI across the enterprise. These units are reporting directly to the Chief Scientific Officer (CSO) or even the CEO to ensure there is an enterprise-wide strategy for AI incorporation.

But that kind of a top-down approach takes a long time to reach the "leaf nodes," and the impact is also not very tangible. Some of the most innovative pharma and biotechs are also taking a bottom-up approach, enabling individual teams to be able to fast-track value realization through AI and not have to wait for the entire organization to embrace it.

This kind of "pocket approach" has its own challenges. We talk about privacy, security, the cost, and how you define success. But when done the right way, these pocket exercises bring increased momentum to the organization’s drive towards AI.

Ultimately, I think the boring answer is that everyone, from IT to business to scientists to the CEO, has to embrace AI and find ways to integrate it into their workflows. That means asking questions, seeking help, or simply starting to adopt it.