
FDA Solicits Feedback on the Use of AI and Machine Learning in Drug Development

By Matthew Chun

The U.S. Food and Drug Administration (FDA), in fulfilling its task of ensuring that drugs are safe and effective, has recently turned its attention to the growing use of artificial intelligence (AI) and machine learning (ML) in drug development. On May 10, FDA published a discussion paper on this topic and requested feedback “to enhance mutual learning and to establish a dialogue with FDA stakeholders” and to “help inform the regulatory landscape in this area.” In this blog post, I will summarize the main themes of the discussion paper, highlighting areas where FDA seems particularly concerned, and detailing how interested parties can engage with the agency on these issues.

Scope of Discussion Paper

As an initial matter, FDA defines AI as “a branch of computer science, statistics, and engineering that uses algorithms or models to perform tasks and exhibit behaviors such as learning, making decisions, and making predictions.” ML, in turn, is described as “a subset of AI that allows ML models to be developed by ML training algorithms through analysis of data, without models being explicitly programmed.” For the purposes of the discussion paper, FDA uses the term drug development broadly to include “a wide scope of activities and phases, including manufacturing and postmarket drug safety monitoring, among others.”

While acknowledging the great potential that AI/ML holds for improving drug development, the FDA’s discussion paper also highlights potential harms and seeks to promote important conversations about current AI/ML practices, opportunities to better manage risks, and the need for greater regulatory clarity. Namely, the paper addresses three main themes:

  • Landscape of current and potential uses of AI/ML
  • Considerations for the use of AI/ML
  • Next steps and stakeholder engagement

Landscape of Current and Potential Uses of AI/ML

The FDA paper begins its survey of AI/ML in drug development with a discussion of drug discovery, which I have blogged about in the past. However, the paper goes on to explore the use cases of AI/ML in later stages of drug development as well.

At the non-clinical research stage, data from in vitro and in vivo studies can be leveraged using AI/ML for “evaluating toxicity, exploring mechanistic models, and developing in vivo predictive models.” AI/ML algorithms are also improving the accuracy of pharmacokinetic and pharmacodynamic models for nonclinical and clinical applications.

At the clinical research stage, AI/ML is being used to analyze data from clinical trials and observational studies to make inferences regarding the safety and efficacy of drugs. Further, AI/ML can inform the design of non-traditional trials and improve the conduct of clinical trials at every stage from recruitment to dosing regimen optimization to clinical endpoint assessment.

At the postmarketing safety surveillance stage, AI/ML is helping automate the processing of individual case safety reports (ICSRs), the evaluation of cases (e.g., to determine the likelihood of a causal relationship between a drug and an adverse event), and the reporting of ICSRs to FDA.

Finally, at the advanced pharmaceutical manufacturing stage, AI/ML is being used to optimize process design, implement advanced process controls (e.g., dynamically adjusting multiple input parameters to maintain output parameters at desired levels), monitor equipment and products, and identify problem areas for continual improvement.

Considerations for the Use of AI/ML

Having reviewed AI/ML’s potential to accelerate and optimize drug development, the FDA’s discussion paper then proceeds to highlight the possible risks and harms. In particular, the concerns that appear to be prominent on FDA’s radar include:

  • “[T]he potential to amplify errors and pre-existing biases in data sources”
  • Concerns about the “generalizability and ethical considerations” of using AI/ML outside of the testing environment
  • The limited explainability and transparency of many AI/ML models
  • Data privacy and security considerations
  • Issues with reproducibility and replicability

Generally, the paper highlights the importance of addressing these challenges through overarching standards and practices for the use of AI/ML, pointing to existing examples such as the Verification and Validation (V&V 40) framework for assessing the credibility of computational models and Good Machine Learning Practices (GMLP) for medical devices. However, FDA acknowledges that none of these standards are specific to the drug development process, and the agency promises to further explore how such standards might be aligned with, and consistently applied to, the drug development sector.

To assist FDA’s thinking about providing regulatory clarity in the drug development space, the discussion paper poses several questions aimed at soliciting feedback on current practices, major obstacles, and opportunities for improvement in three main areas of AI/ML usage:

  1. Human-led governance, accountability, and transparency;
  2. Quality, reliability, and representativeness of data; and
  3. Model development, performance, monitoring, and validation.

Based on any feedback it receives, FDA appears committed to establishing a risk-based regulatory approach that respects the diverse uses of AI/ML in the drug development sector and considers the specific context of AI/ML use to “guide the level of evidence and record keeping needed for [] verification and validation.”

Next Steps and Stakeholder Engagement

In addition to soliciting feedback on the above-mentioned topics, the discussion paper notes that FDA is planning to host a workshop with stakeholders to provide an opportunity for further engagement. The paper also promises that FDA will provide several other mechanisms for stakeholder engagement and highlights existing avenues for discussing relevant AI/ML issues with the agency, including the Critical Path Innovation Meetings (CPIM), ISTAND Pilot Program, Emerging Technology Program, and Real-World Evidence Program.

For those interested in submitting feedback to FDA, electronic or written comments are being accepted until August 9, 2023. Instructions for submitting comments are provided here.

Matthew Chun

Matthew Chun is a J.D. candidate at Harvard Law School and patent agent at Fish & Richardson P.C. He holds a DPhil in Engineering Science from the University of Oxford and a B.S. in Mechanical Engineering from the Massachusetts Institute of Technology. At Harvard Law School, Dr. Chun is Managing Editor of the Harvard Journal of Law and Technology and a Student Fellow at the Petrie-Flom Center for Health Law Policy, Biotechnology, and Bioethics. All opinions expressed are solely his own.
