How Is Generative AI Transforming Clinical Trial Work?

Illustration of robot hand holding pill bottle

/ Taylor Tieden for BioSpace

Generative AI could enhance and accelerate the way people work on clinical trials. In this Q&A, a management consultant shares his insights on benefits, risks and more.

If you ask Rune Bergendorff how a company knows if its clinical trial can benefit from generative artificial intelligence, his response is short and to the point.

“I think all clinical trials can benefit from it,” he told BioSpace.

Bergendorff’s response is an informed one. As a partner at Implement Consulting Group, a management consultancy based in Denmark, he has helped pharmaceutical companies throughout Europe and the United States explore how generative AI solutions like ChatGPT and Microsoft Copilot can improve their processes.

In June, he partnered with Genmab to present at the DIA Global Annual Meeting on incorporating design thinking and generative AI into life science project management. Practical applications for clinical trials included using generative AI to:

  • Find other trials recruiting for the same population
  • Reduce costs
  • Identify risks
  • Improve chances for success
  • Reach endpoints faster

In this Q&A, Bergendorff chatted with BioSpace about using generative AI in clinical trials, covering its benefits, risks and more. The interview has been edited for length and clarity.

Rune Bergendorff, Implement Consulting Group

/ Photo courtesy of Rune Bergendorff

Q: How can generative AI improve the efficiency and accuracy of clinical trial design?

A: I think the possibilities are endless. In our work, the hard part was to dissect the processes and investigate exactly where in the process GenAI is beneficial, and then, just as important, how you actually get the help you need. What is the right way of prompting to make sure the prompt is accurate, unambiguous and to the point so you get the right response?

Take the examples we brought forward at DIA: the points and cases you touch on in any clinical trial. When we need to recruit, for example, I don’t think anyone would say no if you said, “In 30 seconds, I can show you every trial in the world that is recruiting in the same population as yours.” If I asked, “Should I evaluate your patient inclusion criteria? Give me 10 seconds,” everyone would say yes.
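Bergendorff didn’t specify tooling for that kind of search. As a rough, non-authoritative illustration of the idea, a short script could pull actively recruiting studies from the public ClinicalTrials.gov v2 API before handing them to a GenAI model for comparison; the condition string and limit below are hypothetical examples, not his method.

```python
import requests

# Minimal sketch: query the public ClinicalTrials.gov v2 API for trials
# actively recruiting in a given population. The condition string is a
# hypothetical example; adjust it to your own trial's population.
def find_recruiting_trials(condition: str, limit: int = 10) -> list[dict]:
    resp = requests.get(
        "https://clinicaltrials.gov/api/v2/studies",
        params={
            "query.cond": condition,                # population / condition of interest
            "filter.overallStatus": "RECRUITING",   # only trials currently recruiting
            "pageSize": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return [
        {
            "nct_id": s["protocolSection"]["identificationModule"]["nctId"],
            "title": s["protocolSection"]["identificationModule"]["briefTitle"],
        }
        for s in resp.json().get("studies", [])
    ]

if __name__ == "__main__":
    for trial in find_recruiting_trials("metastatic breast cancer"):
        print(trial["nct_id"], "-", trial["title"])
```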

Q: How do people ask generative AI the right questions so they can get the right response?

A: I would suggest training both in identifying good use cases based on existing ways of working and then light training in prompt engineering.

Prompt engineering is an important discipline, and a lot of companies can benefit from building prompt libraries to share among teams, storing good, proven prompts so people can reuse them and get reliable results every time. Prompt libraries are a built-in feature if you have a private instance of GenAI running in your company.

Build high-quality prompts so you don’t need to sit there with a blank screen and think, “Ooh, how do I actually do this prompt?” Some of them become fairly lengthy because you need to give them context. You need to describe what you want to achieve: Is it a phase two trial? Is it a phase one trial? What kind of company are you? There’s a lot of context to give it, and the more precise you are, the better the response.
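Bergendorff doesn’t describe what such a library looks like in practice. Here is a minimal sketch, assuming a team keeps vetted templates in a shared, versioned JSON file; the file name, template key and placeholder names are all hypothetical.

```python
import json
from pathlib import Path
from string import Template

# Minimal sketch of a shared prompt library: vetted templates live in a
# versioned JSON file, and users fill in trial-specific context at use
# time. File name, key and placeholders are hypothetical examples.
LIBRARY_PATH = Path("prompt_library.json")
# Example stored entry:
#   {"inclusion_criteria_review":
#     "You are reviewing a $phase trial run by a $company_type.
#      Evaluate these inclusion criteria for ambiguity and
#      recruitment risk: $criteria"}

def render_prompt(name: str, **context: str) -> str:
    """Fetch a proven template by name and fill in trial-specific context."""
    library = json.loads(LIBRARY_PATH.read_text())
    return Template(library[name]).substitute(**context)

prompt = render_prompt(
    "inclusion_criteria_review",
    phase="phase 2",
    company_type="mid-size biotech",
    criteria="Adults aged 18-75 with a confirmed diagnosis...",
)
print(prompt)
```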

Q: Are there any limitations or risks involved in using generative AI in clinical trials?

A: One big risk is privacy and security. As we are talking about clinical trials, don’t ever post your information in the publicly available ChatGPT. You need a private version of that. I think that’s the most important one. It’s also the simplest one to fix, because it’s just setting up your own instance. That could be done pretty quickly.
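Bergendorff doesn’t detail how a private instance is wired up, and setups vary by vendor. As one hedged sketch, assuming a company-hosted gateway that speaks the OpenAI-compatible API (the URL, key and deployment name below are hypothetical), client code points at the internal endpoint rather than the public service:

```python
from openai import OpenAI

# Minimal sketch: point the client at a private, company-hosted endpoint
# instead of the public ChatGPT service, so trial data stays inside your
# environment. The base_url and model name below are hypothetical.
client = OpenAI(
    base_url="https://genai.internal.example.com/v1",  # private gateway, not api.openai.com
    api_key="YOUR_INTERNAL_API_KEY",                   # issued by your IT, not OpenAI
)

response = client.chat.completions.create(
    model="company-gpt4-deployment",  # hypothetical private deployment name
    messages=[
        {"role": "system", "content": "You assist with clinical trial operations."},
        {"role": "user", "content": "Review these inclusion criteria for ambiguity: ..."},
    ],
)
print(response.choices[0].message.content)
```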

Generative AI image of hooked syringe

/ Courtesy of Rune Bergendorff

Then, I think a risk is that it’s decoupled from reality. Before the DIA session, I asked ChatGPT to give me an accurate review of our session, even though it was still seven months away. I got a really nice article, but of course it was not true, because the session hadn’t taken place yet.

The syringe image is my favorite example. I asked GenAI for an image of a syringe. In this case, it created a small hook at the end, because most of the images it has seen likely have a little drop at the tip, and it interpreted that drop incorrectly and drew a hook instead.

There’s also an issue of bias, as seen in the pictures of leaders. We asked GenAI to create a beautiful image of leadership, and we did that nine times. On the ninth time, it created a lion. And in none of the pictures is there a woman. The algorithm has been updated since, so you will get a different response now, but it proves that there will be biases. GenAI is trained on the world, and in the communication that we have openly available, there is a certain bias.

Generative AI images of leadership

/ Courtesy of Rune Bergendorff

Ethics is also a big issue. Is it sound to apply this technology and not use humans? It’s the classic “Oh, it’s a robot, so it will take over the world,” and that argument doesn’t work, in my opinion. But we need to recognize the concern and the ethical aspect of that. My response there would be it’s a matter of qualifying and changing the ways of working. I don’t want to spend a week identifying risks if I can get them in 10 seconds. So no, it doesn’t harm me. It actually helps me.

Then, you can discuss the data. Is it ethically sound to put that data into the machine? We don’t fully understand how it works. You could say that if you have a private version of ChatGPT, you fence it in, and you can always pull the plug if you don’t want to continue. But in all honesty, if you have that private instance and you put in your deepest secrets, we don’t fully know what’s happening. We don’t know why it improves, why it gets better, why it remembers all of this. We cannot completely explain it, and is that ethically correct?

Q: What infrastructure and expertise are required to effectively implement generative AI in clinical trials?

A: A lot of advanced solutions can be implemented, but I think we need to learn to walk before we run a marathon. At this time, it is more than enough to implement a company-specific ChatGPT to work with private data and then invest a lot of time in scrutinizing the ways of working and how to apply it.

It’s not the technology that’s the hard part. It’s the human part of actually working with people to say, “OK, how can you then use it?” I know a lot of companies that have gotten Copilot, for instance, and then they’re like, “OK, now I have this colorful logo. What do I do?”

Q: What’s the importance of the human element in generative AI?

A: You need to verify it. GenAI is an accelerator, and it’s a means of extending your own capabilities. It’s difficult for you and me to comprehend a full protocol and spot that, because of this and that, something could be a risk. But when we are presented with that risk, it’s quite easy for us to say, “Hmm, that’s actually true. I didn’t catch that, but it’s true.” And on the other hand, it’s also quite easy to say, “That’s just far out.” At the end of the day, it’s still your responsibility—our responsibility—to evaluate what we get.


Angela Gabriel is content manager at BioSpace. She covers the biopharma job market, job trends and career advice, and produces client content. You can reach her at angela.gabriel@biospace.com and follow her on LinkedIn.