While Gen AI has taken the world by storm, as is evident from the consumerization of the technology, scepticism is creeping into the highest levels of business regarding the ROI of AI investments. A recent study released by the MIT Media Lab/Project NANDA found that 95% of investments in generative AI have produced zero returns.
For its part, in what could be termed a "watershed moment" in the life of this fledgling technology, Gartner has said that Gen AI has entered the "Trough of Disillusionment", the third phase of its five-stage Hype Cycle and an indicator of widespread dissatisfaction with a technology. To bolster its claim, the Gartner report states that fewer than 30% of AI leaders report that their CEOs are happy with the return on their AI investments.
Given that this breakthrough technology is driving individual productivity, albeit sporadically, while struggling to deliver cohesive P&L-level ROI, scientific leaders find themselves at an interesting intersection. They are compelled to balance the promise of AI against a realistic estimate of its ROI -- a dichotomy that is potentially anxiety-inducing and laced with uncertain business outcomes.
Right up front, there's a lot happening in the market today. From an economic point of view, and considering the impact of tariffs, there's a great deal of business uncertainty. At the same time, many significant AI investments are also being made.
Given this charged atmosphere, what are you hearing from the market? What are top executives telling you when you meet with them? Are you picking up on any signals that business newspapers aren't publishing -- any insights that would be interesting for our readers?
I think right now there's a lot of anxiety and uncertainty, especially in large pharma, when it comes to how best to make use of AI. Alongside that AI-induced anxiety, we're also seeing many executives being replaced, specifically at the VP level, with new people coming in. This is causing a lot of turmoil.
When these transitions happen, budgets often get frozen for a bit because the new executives tend to bring in their own people. This creates uncertainty among the levels below them, who are worried about their jobs and the new direction the company will take.
Right now, I feel there's a lot of uncertainty. However, in the last few weeks, I've noticed things are starting to settle down as the new people get into place. It felt like things came to a halt earlier this summer, around May and June, but now they're starting to open up again with the new leadership in place at many companies.
When it comes to technology choices, if you take any industry, technology is the backbone. We may work in a bank, a lab, or on an airplane, but technology is what drives almost all workflows.
So, regarding the biopharma sector, what is your take, especially given the AI buzz? Who usually holds the power to make technology decisions? Is it the scientific side, i.e., the business side, or the IT/Informatics side? Who decides what technology the business should be using?
I think there are different cultural components across biopharma. In some companies, the power sits with the business; in others, with the IT group; and in some, it's a mix.
What I see is that successful organizations are those where IT and science work collaboratively. When you "throw things over the wall" -- if IT decides it needs to update its technology and simply throws that decision over to the scientists -- it will be met with huge resistance. Organizations that have senior-level people at the top providing a clear direction and vision on how technology will enable them to do their science better and more efficiently -- for instance, helping them identify targets or stratify patients faster -- are the ones that have the most success.
So where is the power held? IT may have the power to implement something, but if they can’t convince the scientists to make that change, it's just not going to happen. It's a bit surprising to me, in a sense: some scientists still like to use Excel. Their mentality is, "Why should I change? Give me a reason to change. Why change when I've been doing this for the last 30 years and it’s been working for me?"
In research specifically, there needs to be a compelling "why" for them to change. It's up to the IT department, along with the cultural leaders on the science side, to make that case. The younger generation of scientists is trained in data science and computational biology, so they are comfortable with AI. But for those who have been in the lab and have been successful for 30 years, there needs to be a whole change-management and adoption process. People will only make a change if they see value in it, and it's up to the entire organization to demonstrate that value.
Okay, that's interesting. So, what are some of the most common, or perhaps surprising, forms of resistance you've been facing lately? I'm asking especially in the context of large digital transformation programs, and even with well-established technologies like ELN or LIMS, not just new ones like AI, which is still in its nascent phase. What are some of the surprising forms of resistance?
I think there's a lot of fear about what this means for people. Fear of how it will affect their day-to-day job, and whether they might be out of a job.
I believe if you look downstream on the manufacturing side, that's where most of the resistance will be -- because of all the compliance, GxP, and GMP regulations. When systems like LIMS are in place, they're expected to stay for a decade. It's amazing to me that at some of the big pharma companies, the systems are 12 years behind. They don't upgrade because the time and money required for an upgrade in a compliant, validated system is a huge effort. So, they tend to kick the can down the road and don't keep their technology up to date. It's working and validated, they feel secure with it, and there's always the risk: if we update the system, are we going to introduce errors? Will we be able to answer the auditors' questions? Fear is probably the biggest factor.
In the clinical space, it's also about how it's going to impact the integrity of the data, which is a big concern.
I also think that, at least in the many decades I've been in this industry, there have been a lot of hype cycles. Back in the day, computational chemistry was a huge hype. Everyone thought it was going to solve everything and create all these new drugs, so people jumped on the bandwagon. Then combinatorial chemistry came along, and everyone thought we'd be able to develop new drugs using Silicon Graphics systems and wouldn't need to do as much lab work. All of these went through a hype cycle and then fizzled out.
Based on who I talk to, there's a belief that the current hype won't fizzle out completely but will come to a more standardized level where people will see the true value. So, some people are taking a wait-and-see approach. They don't want to be part of the hype. Once it settles down and they can see where the value is and where it can be successfully applied, then they will embrace it and get more involved.
That makes a lot of sense. Companies do change their platforms, even if they'd prefer to stick with them for a decade. While it might happen at a glacial pace, digital transformations are indeed taking place.
When a company decides to replace its LIMS, for example, it's a huge undertaking. They must overcome all the hurdles you mentioned -- regulations, effort, cost, and investment. If they've managed to jump through all those hoops, there must be a very compelling reason for them to do so. What typically pushes them over the edge?
At some point, older systems just won't be able to run on current platforms. Eventually, they will break.
Most companies are now cloud-based, though I still see manufacturing operations that are on-premise. The research side has moved to SaaS; I don't know any research organizations I deal with that haven't moved to the cloud. However, I still see resistance to moving to the cloud in manufacturing.
But executives are seeing the need to break down silos -- because it's all about the data. We need things to be more centralized so we can use data across R&D, clinical, and manufacturing and make the best use of it. People are starting to see, and it's coming from the top down, that these silos need to be broken down for competitive advantage, for speed to market, and so on.
You mentioned data, and since you have spent decades in this industry, I was wondering: when did people start realizing its importance? When I read articles now, I get the sense that all research basically comes down to data. So much so that the term "Biotech" has been flipped; now there are "TechBio" startups where tech geeks come in with quantum and physics ideas to find patterns in data, similar to what happened in Wall Street trading a couple of decades ago. Historically, when did this trend start?
I think it's been a journey, but it's really picked up speed in the last seven years. That would be my take. Maybe 10 years ago, it was still a theory, but in the last seven years, it has become more actionable. And honestly, in the last three years, it's grown exponentially. People are starting to feel nervous, like "am I missing something?" I went to a forum recently where executives from a big-name pharma said, "We feel so far behind because we're not using agentic AI."
There's so much hype and talk about using AI that executives are saying, "We need to roll this out. We need to make sure every scientist in the organization understands what it can do for them." So, in the last two or three years, I think there has been real pressure from the industry, with companies feeling like they are behind the curve.
Big Pharma went through the patent cliff. Did that play a role? Are there any specific reasons you can think of? Something must have happened about seven years ago that pushed people to realize how important data is.
It's a natural evolution. I think it's also been influenced by the cross-pollination of executives from outside of biopharma who are now entering the industry and bringing a different perspective. They look at what the industry is doing and say, "Wait, you guys are really far behind. We've been doing this for the last decade. Why aren't you?"
The pharmaceutical industry is far behind other sectors like banking or finance in terms of digital transformation. Why is that? I would go back to fear and regulatory issues that have really prevented people from embracing this transformation as quickly as other industries.
But could it also be because finding a drug target is getting that much more difficult, and perhaps all the low-hanging fruit have already been taken? Is that something you hear from executives?
I think yes -- it's gotten harder. I also think the new drugs coming out aren't just based on biology or chemistry alone. They're a combination of new modalities, such as cell and gene therapy or combination therapy. This means there's now a greater need to understand how data science can connect these different, siloed modalities.
There's much more emphasis on how data can bring these things together, which is a shift from a time when a scientist might have focused solely on creating the next small molecule.
You interact with a lot of professionals in the industry and exchange a lot of ideas. So, regarding the importance of data, how does the IT or Informatics side see it versus the scientists? Do they perceive it differently? Is there a gap?
Yes, I believe so. Scientists have typically focused narrowly on their own data and on owning it. In contrast, IT's perspective is often, "Let's do an enterprise-wide solution for everybody." But when you do something for everyone, each person feels like it doesn't do exactly what they need for their specific workflow.
I've seen instances where IT just threw a platform over the wall, ripping what scientists were used to out of their hands and giving them something that simply doesn't work for them.
What they should have done is brought people together so they could be part of the solution. People don't like being told what to use. But when they're involved, they might accept a solution that's not 100% perfect for them because they understand it's for the good of the organization -- so that data can be used not just by them and their department, but by others as well. They need buy-in.
Instead of a two- or three-year transformation that people don't want to be a part of, organizations should look for quick wins to show value. They need to demonstrate that a new system won't replace a person's job. Instead, it will handle mundane tasks so they can focus on more creative, scientific work. You need to sell them on the value it brings and the reason for the change.
What's the mix like? Let's say out of 10 companies you've worked with, what's the ballpark breakdown? In how many did IT hold the power, and in how many did scientists hold the power?
I'd say it's about 50/50. The power dynamic can shift, especially when there are changes in upper-level management, as I mentioned earlier. At times, I've seen the business -- that is, the scientists -- have more power than IT, and at other times, I've seen IT have more power than the business. So, it really depends on the situation.
Margaret, where do you see the industry heading in the next two to three years? What is your reading of the situation, given that you are picking up AI-induced anxiety signals from a section of the industry, and how do you see this playing out?
AI is not going away. I believe organizations are now essentially trying to accelerate the deployment of AI and are doing more on education and enablement.
They are also working out where there could be some quick wins and bringing people on board and onto the bandwagon. I think there is a bit of hype around AI, and that will settle down to the point where it becomes part of everybody's day-to-day routine and people embrace it more. I think that's where we are right now. It's here to stay, but we need to make sure people understand it is not replacing humans. It's making people more efficient and advancing science to ultimately improve human lives.
