Tempting as it may be to turn to full automation to meet burdensome requirements, the potential for hallucination and other issues means biopharma companies must proceed with caution.
The current excitement around generative artificial intelligence (GenAI) transcends industries and sectors. Within the biopharmaceutical space, it has driven a broad push to adopt tools and processes that leverage GenAI capabilities. And it’s no surprise. Research suggests that AI could drive time and cost savings of between 25% and 50% during the preclinical stage of drug development, accelerating time to market. Additionally, on the regulatory submission side, GenAI has the potential to support the preparation and focused review of product applications and variation documents, with an estimated 40% productivity improvement for high-frequency tasks.
At first glance, especially for regulatory activities, GenAI may appear to be a one-size-fits-all solution for enhanced automation, a “golden hammer” that can solve any problem. However, not every issue is a nail. Though the discussion around AI applications and use cases continues to unfold, there are several instances where regulatory activities within companies of a certain size and structure could currently be better served by technologies or services other than GenAI.
The Problem of Limited Training Data
Global regulatory complexity is advancing at an ever-increasing rate, challenging organizations to find ways to manage the go-to-market activities of products in a cost-effective, timely manner. Regulations, standards and market access considerations can vary by country as well as by product type, and the continued growth of regulations adds even more pressure for companies to maintain compliance.
For example, there are over 30,000 distinct types of pharmaceutical products, and when paired with specific country standards, the resulting datasets are massive. The same is true for medical devices: with over 500,000 different types, navigating the variability of products seeking approval in countries with differing regulations and standards is an increasingly difficult task. From speaking with industry professionals in my capacity as a senior director at IQVIA, it is clear this perfect storm is leading many to search for tools that can optimize the efficiency of regulatory professionals, eliminate routine manual tasks, increase process predictability and provide deeper insights at a rate beyond human capabilities.
However, it is critical to evaluate the type of GenAI application prior to any integration into regulatory operations. With large language models (LLMs), the key word is “large.” As the volume of data available to train a model decreases, the potential for inaccuracies in the generated output increases, which can lead to a greater number of “hallucinations”: generated content that at first glance appears genuine but is, in fact, fictitious.
Where GenAI is used in a publicly available domain, there is a significant volume of data on which to train and validate AI solutions. However, as the focus narrows to the life sciences, then to a subset of that massive industry, and finally to specific companies whose product information is proprietary, the volume of data available to train GenAI solutions shrinks dramatically.
Additionally, when we look at the rate of change of regulations and consider the potential use of GenAI to help draft global product launch requirements or compile documentation needed for regulatory submissions in multiple countries, it is critical that the drafted AI content meets current regulations and standards. It is for this reason that many companies look to augment existing human professionals with GenAI (and indeed other AI) technologies rather than allowing GenAI to operate immediately without oversight and control.
Risks of Using GenAI for Regulatory Compliance
In addition to the risk of hallucinations, it should be noted that GenAI is not without bias: it has been trained on a body of information that a small subset of humans judged to be desirable. This can lead to inaccurate data or inappropriately generated content. For example, the inclusion of data on product performance and clinical efficacy that underrepresents ethnic diversity or carries gender bias could have significant patient safety implications if such content were approved without challenge.
Mitigating such bias and providing the feedback needed for continuous development of the algorithms comes at a cost and, for some companies, that cost may currently be prohibitive. Furthermore, using flawed content to create outputs such as registration submissions and safety reports can lead to reputational damage with regulators, inappropriate outcomes and potential risks to patient safety.
The risk of data leakage should also be taken into consideration when evaluating GenAI tools. Once information is put into the system, the question of who owns the inputs and outputs is critical to ensure control of proprietary information. Controlling ownership by training AI tools on company-specific data (likely limited in size) as well as relevant publicly available data comes at a financial cost that not all companies can bear while GenAI solutions are in their infancy.
Responsible Use of GenAI
Nevertheless, there are certain situations where GenAI can be applied productively. For instance, with a mass-produced product that has been on the market for decades, there is an established risk profile with known contraindications and product performance parameters. In this context, where the data and information behind the product are available and validated, GenAI can be extremely beneficial in the reporting of safety events for the product. This allows industry professionals to focus their time on safety events for products where data is limited, such as when the product is new or has undergone a recent reformulation. In these cases, the risk of relying fully on GenAI solutions outweighs the benefit.
As use cases for GenAI evolve, some element of human oversight remains necessary to mitigate risks, especially within regulatory processes. The AI-augmented user can combine the best of AI technologies with human expertise to ensure the continued provision of safe and effective products. In some cases, companies may decide that financial limitations, the risk of data bias or hallucinations, or the limited applicability of AI tools to their processes justify staying with more established technologies whilst the industry grapples with the application of GenAI in the biopharma and adjacent sectors.
The discussion around the use of GenAI within regulatory operations and the overall life sciences industry is ongoing, and it is critical to approach the next generation of automation with both excitement and caution. Ultimately, while AI advancements are exciting, the focus must remain on how the technology can be used in an optimized and efficient way to ensure the safe and effective provision of healthcare products to global markets.
Mike King is the senior director of product & strategy, technology solutions at IQVIA. He has around 20 years of experience leading local and global teams in regulatory affairs and quality assurance and has worked across multiple sectors of the healthcare industry.