The FDA’s guidance on AI in drug development points to potentially life-threatening consequences of the technology, highlighting the importance of providing the regulator with detailed information regarding models’ development and maintenance.
The FDA on Monday dropped its first-ever draft guidance for the use of AI in the drug development process. Specifically, the document covers the use of the technology to generate safety, efficacy and quality data to support regulatory decision-making.
In its 23-page document, the regulator proposes a “risk-based credibility assessment framework” for using AI across the drug product life cycle. Among the considerations in this framework are the question of interest, the AI model’s context of use, the risk posed by the model itself and the credibility of the AI’s output within its specific context of use.
The FDA provides a hypothetical example: A company is advancing a novel drug candidate that is known to be linked to a life-threatening side effect, and the drug sponsor proposes to use an AI model to stratify patients into subgroups according to their risk of these adverse events.
Patients the model deems low-risk would be sent home for outpatient monitoring, while high-risk patients would be admitted for inpatient surveillance.
“In this example, model influence would likely be estimated to be high because the AI model will be the sole determinant of which type of patient monitoring a participant undergoes,” according to the FDA, which added that the “decision consequence is also high.” If a high risk patient is mistakenly placed by the model into the outpatient category, then that patient “could have a potentially life-threatening adverse reaction in a setting where the participant may not receive proper treatment.”
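The framework’s core logic pairs model influence with decision consequence to gauge overall model risk. The sketch below illustrates that pairing for the FDA’s hypothetical; the tier labels and scoring matrix are illustrative assumptions, not the agency’s actual rubric.

```python
# Illustrative sketch (not the FDA's rubric): model risk as a combination
# of model influence and decision consequence, as in the draft framework.

def model_risk(influence: str, consequence: str) -> str:
    """Combine model influence and decision consequence into a risk tier."""
    levels = {"low": 0, "medium": 1, "high": 2}
    score = levels[influence] + levels[consequence]
    # Hypothetical mapping of combined score to an overall risk tier.
    return ["low", "low", "medium", "high", "high"][score]

# The FDA's hypothetical: the model is the sole determinant of monitoring
# (high influence) and a misclassification could be life-threatening
# (high consequence), so overall model risk comes out high.
print(model_risk("high", "high"))  # -> high
```

Under this framing, a higher risk tier would call for more rigorous credibility assessment activities before the model’s output could support regulatory decision-making.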
Because the output of AI models can be highly consequential—not only for the regulatory decision-making process but also for patient outcomes—drug sponsors seeking to use AI in their development process should provide the FDA with a thorough description of the model, the rationale for its use, its limitations and details of its development, evaluation and maintenance.
Importantly, the FDA recommends that drug developers who intend to use AI in their processes reach out early and engage the regulator in a timely manner to “set expectations regarding the appropriate credibility assessment activities” for the model. Involving the regulator early in the process will also help the drug sponsor identify potential challenges and roadblocks for their proposed use of AI.
The FDA will accept comments on the draft guidance for 90 days. It has already received several suggestions from industry players in connection with an AI discussion paper published in mid-2023.
Aside from the AI guidance, the FDA on Monday also released recommendations regarding accelerated approvals and the use of tissue biopsies in clinical trials.