
A Regulatory Framework for Integrating AI into Drug Development

Collaboration among industry experts and regulatory agencies on a clear, risk-based regulatory approach and validation framework for artificial intelligence (AI) will ensure that AI enhances the drug development process to enable more effective and safer medicines.

As a leader in creating innovative life science platform companies that leverage cutting-edge artificial intelligence (AI), Flagship Pioneering applauds the U.S. Food & Drug Administration (FDA) for holding the August 6th public workshop on the use of AI in drug development. The responsible use of AI can greatly enhance the safety, speed, and efficiency of drug development and create new medicines that serve millions of patients with serious unmet medical needs.

To encourage such innovation while fulfilling the FDA’s mission to protect public health and ensure the safety and efficacy of drugs, the FDA should now work expeditiously with stakeholders to develop and finalize a risk-based regulatory framework governing the use of AI in drug development that is predictable and transparent for both drug developers and the public. This framework should, at its core, be based on how specific uses of AI alter the risk-benefit calculation for patients who may receive a drug that is either in clinical trials or has received regulatory approval. Further, to ensure the trustworthiness of AI model outcomes, there is a need for a model validation structure based on objective criteria that ensure accuracy and reproducibility within a defined scope for each use case.

Timely development of a risk-based regulatory framework

Clear guidelines, developed expeditiously, are essential for industry stakeholders to understand the level of FDA engagement required for various AI implementations across different stages of drug development. These guidelines should be delineated within a risk-based regulatory system that recognizes that not all AI uses in drug discovery and development necessitate unique oversight by the FDA. For example, AI systems involved in early-stage activities, such as biological target identification and molecule design and optimization, serve to accelerate drug discovery and development processes without altering the traditional development of lead drug candidates with respect to safety and efficacy testing.

In contrast, applications with potential patient impacts will require industry-FDA engagement, based on a nuanced, balanced regulatory approach. The FDA should provide clear guidance on AI use cases necessitating different levels of agency engagement, classifying them by their impact on efficacy and patient safety. For example, lower-risk applications, such as AI-driven patient stratification in clinical trials that categorizes patients to enhance trial outcomes without increasing risk, may warrant less stringent FDA oversight of the AI model outputs.

Conversely, medium-to-higher-risk AI applications that have direct implications for patient safety or may negatively alter the risk-benefit assessment will likely necessitate more rigorous FDA oversight and consultations. For example, relying on AI models to predict human toxicity rather than using more traditional (though often less predictive) animal models, or using an AI model to serve as a clinical trial control group instead of a placebo-controlled trial arm, raises important regulatory questions. In these higher-risk cases, we would continue to draw on existing guidance and current best practices, but we also expect that there will be some level of validation and transparency required by regulators before these technologies can be deployed to generate clinical evidence that is acceptable for regulatory review.

In addition to assessing risk categories based on patient impact, the FDA should also consider other factors when determining the appropriate level of oversight. For example, a risk-based framework should account for differences in risk profile among the various types of AI models being used. Traditional bioinformatics and machine learning models likely present different risks than generative AI models. The FDA should also consider whether AI is the sole source of evidence supporting a given approach or finding, or whether it is being used to augment existing metrics and processes.

Categorizing AI applications into risk-based tiers ensures that regulatory requirements align with the significance of potential impacts on patient safety and drug efficacy. Risk-based tiers are also not new to the FDA or industry, and can thus be readily explained to and accepted by different stakeholders. Such a framework promotes continued innovation in AI/ML systems by specifying when and what type of data submissions and consultations are necessary. Importantly, the framework should enable early consultation and decision-making where needed, so that innovative drug developers can develop their investigational new drug (IND) packages with confidence that individual FDA reviewers will not demand different types of data or information on AI models later in the process. This structured approach will avoid regulatory bottlenecks and help balance the responsibilities of the FDA and drug developers, ultimately contributing to the safe and efficient advancement of biotechnological innovations utilizing AI.

A validation framework to ensure output reliability

A key question that regulators need to address is what degree and type of validation will be necessary to promote sufficient levels of trustworthiness and reliability in AI models used in drug development. A robust validation framework for AI models applied to drug development should be based on objective criteria and tailored to the specific needs of the delineated risk categories.

The validation requirements should focus on reproducibility and accuracy through controlled experiments using independent datasets. These validation protocols should demonstrate the model's reliability without necessarily requiring an exhaustive explanation of the model's “working mechanisms.” Indeed, comprehensive explainability is often not possible, especially for generative AI models. Instead, the emphasis should be on the results and their consistency for specific use cases in real-world applications.

The responsibility for model validation should lie primarily and initially with drug developers, guided by frameworks established by the FDA and/or other relevant bodies. Drug developers must clearly define the AI model’s use case — such as drug safety profile assessment — and delineate the scope of the validation process. This ensures that the evidence generated meets regulatory standards, offering assurance to both regulators and stakeholders. The framework should apply pragmatic, risk-based approaches that protect proprietary model architectures and datasets from disclosure and provide the flexibility necessary to innovate.

The regulators' role is then to review the evidence submitted by sponsors to ensure that it is adequate for validation purposes. If the FDA lacks the resources or expertise to perform this oversight systematically, third-party validation review could be an alternative. However, it is critical that the FDA have some centralized, internal expertise with a deep understanding and command of AI and its applications in biotech. This foundation will allow the agency and sponsors to build a cumulative understanding of model validity as more experiments are done over time, thereby reducing oversight requirements in the future.

A path toward positive patient impact

While clear guidance for drug developers and other stakeholders will be essential, as described above, the surest path toward leveraging the promise of AI in accelerating drug development for patients is to ensure opportunities for sponsors to meet early and as needed with FDA’s experts to promote alignment on AI use and on FDA data and oversight requirements in the context of particular development programs. This will provide predictability and transparency, and will help avoid potential regulatory bottlenecks later in the process.

In addition, given that technological advances will continue to accelerate in this area, effective AI regulation will be contingent on ongoing engagement between the FDA and industry. Workshops, stakeholder meetings, and other opportunities for continuing open dialogue provide crucial platforms for an exchange of expertise that goes beyond published guidance. In the context of rapidly advancing technology, such engagements ensure that regulations remain pertinent, safeguarding the pace of innovation toward patient benefit.

As the FDA and relevant stakeholders move forward together to craft a responsible regulatory framework for the use of AI in drug development, the focus must be on helping drug developers and the FDA carry out their “dual mandate” — to promote innovation on behalf of patients in need, while also ensuring that novel medicines pass a rigorous risk-benefit assessment. AI is a powerful tool that, if regulated accordingly, will enable both industry and regulators to accomplish this shared mission more effectively and efficiently.

Essay by

Armen Mkrtchyan

Armen Mkrtchyan is an Origination Partner at Flagship Pioneering and leads Pioneering Intelligence, an initiative to institutionalize and expand the use of Artificial Intelligence (AI) across Flagship and portfolio companies, and to help drive…

Tom DiLenge

Tom DiLenge joined Flagship Pioneering in 2022 as Senior Partner, Global Public Policy, Regulatory & Governmental Strategy. Tom leads Flagship’s public policy, regulatory, and governmental affairs functions, including development and execution of…
