Sponsored

How DIA Is Helping Regulators Turn AI Principles into Everyday Review Practice

Through its AI Consortium and global forums, DIA is translating high‑level AI guidance into concrete workflows that match oversight to risk, from low‑stakes automation to decision‑critical regulatory analyses.

DIA is a leading global interdisciplinary life science membership association that drives multi-sector collaboration to accelerate innovation in medical product development in pursuit of improved patient outcomes. During the 2026 DIA Global Annual Meeting, scheduled to take place in Philadelphia, PA this summer, many discussions and presentations will focus on how regulatory agencies, including the FDA, are navigating the opportunities and challenges of incorporating artificial intelligence (AI) into the regulatory review process while maintaining public trust, scientific rigor, and human oversight.

In 2025 and 2026, the FDA has been increasingly integrating AI into its regulatory review processes for medical products. This includes using AI to assist in scientific and safety evaluations of drugs, biologics, and medical devices and diagnostics, as well as to streamline the review process itself. AI is being used to automate repetitive tasks, accelerate review timelines, and reduce the administrative burden on FDA staff, freeing experts to focus on more complex aspects of the review. The FDA’s increasing adoption of AI in regulatory review signifies a shift toward more efficient and data-driven decision-making.

Regulators around the world are discussing how to evaluate the level of risk and tailor their approach accordingly: a human is always in the loop for high-risk decision-making, while no human is in the loop in areas where purely administrative efficiencies are being created. Models must be appropriately validated, particularly in contexts where no human oversight is present, to mitigate risks such as hallucinations or unsupported outputs. Testing not only the model itself but also the workflow in which the model operates is critical.
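As a purely illustrative sketch of this tiered approach (the tier names, categories, and functions below are hypothetical assumptions, not any agency's actual implementation), output from an AI tool in a decision-critical context can be routed to mandatory human review, while low-risk administrative output passes through automatically:

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    ADMINISTRATIVE = "administrative"        # e.g., formatting, summarization
    DECISION_CRITICAL = "decision_critical"  # e.g., safety or labeling analyses


@dataclass
class ModelOutput:
    use_case: str
    tier: RiskTier
    content: str


def request_human_review(output: ModelOutput) -> str:
    """Stand-in for a real review queue; here we simply flag the output."""
    return f"[PENDING HUMAN REVIEW] {output.content}"


def route_output(output: ModelOutput) -> str:
    """Route an AI output according to its risk tier (illustrative only)."""
    if output.tier is RiskTier.DECISION_CRITICAL:
        # High-risk context of use: a human must approve before the output is used.
        return request_human_review(output)
    # Low-risk administrative context: no human in the loop.
    return output.content


summary = ModelOutput("meeting-minutes summarization",
                      RiskTier.ADMINISTRATIVE, "Summary text ...")
signal = ModelOutput("safety-signal analysis",
                     RiskTier.DECISION_CRITICAL, "Possible signal ...")
print(route_output(summary))  # passes straight through
print(route_output(signal))   # flagged for human review
```

In a production workflow, the review step would feed a tracked queue with audit logging, so the basis for each decision could be inspected later; the point here is only that the routing rule, not just the model, is part of what gets tested.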

DIA’s AI Consortium, a public-private partnership launched in 2025, serves as a neutral, pre-competitive forum connecting regulators, industry, academia, and technology providers, with a focus on operationalizing guidance and addressing how organizations translate risk-based principles into day-to-day workflows. The consortium has three working groups, one of which is creating a validation framework that emphasizes that reliability must be demonstrated at both the technical and operational levels to prevent misplaced trust in partially validated tools.

Additionally, consortium partners are currently mapping how global regulators are stratifying AI use cases according to risk and context of use, from administrative automation to decision-critical analyses. This use-case mapping helps organizations understand not only where AI fits within regulatory workflows, but also what level of validation, documentation, and human oversight is appropriate for different categories of AI use. This classification supports proportional validation approaches, ensuring low-risk efficiency tools (like summarization or data extraction) have lighter oversight, while models influencing clinical or labeling decisions undergo structured validation aligned with Good Machine Learning Practice. Some of their findings will be presented at the DIA Global Annual Meeting this June.
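To make that proportionality concrete, one could express such a classification as a simple lookup; the category names and oversight levels below are purely illustrative assumptions, not the consortium's published mapping:

```python
# Hypothetical mapping of AI use-case categories to oversight expectations.
# Categories and levels are illustrative only, not a published framework.
OVERSIGHT_BY_CATEGORY = {
    "summarization":        {"validation": "light",        "human_review": False},
    "data_extraction":      {"validation": "light",        "human_review": False},
    "safety_signal_triage": {"validation": "structured",   "human_review": True},
    "labeling_support":     {"validation": "GMLP-aligned", "human_review": True},
}


def oversight_for(category: str) -> dict:
    """Look up oversight expectations; unknown categories default to the strictest tier."""
    strictest = {"validation": "GMLP-aligned", "human_review": True}
    return OVERSIGHT_BY_CATEGORY.get(category, strictest)


print(oversight_for("summarization"))    # light validation, no human review
print(oversight_for("labeling_support"))  # GMLP-aligned validation, human review
```

Defaulting unknown use cases to the strictest tier mirrors the conservative posture the consortium's mapping work is meant to support: a tool earns lighter oversight only once its category and context of use are understood.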

Beyond current uses, AI can also play a role in enhancing post-market surveillance of medical products, helping to identify potential safety signals and trends more quickly. This is an application that many other regulatory agencies, such as ANVISA, MHRA, PMDA, and Health Canada, are also currently pursuing. Future applications include using AI for predictive risk assessments, in which historical and real-time data are analyzed to predict potential future risks associated with drugs, clinical trials, or manufacturing processes, enabling proactive intervention.

The context of use determines the level of risk for a given AI-assisted decision, which is why high-risk environments are usually coupled with a human in the loop. Humans need oversight to catch errors, especially in higher-risk situations, and to make final decisions based on individual contexts. As with any large language model, time and training data are key. The benefits AI will bring to regulators will only be realized by emphasizing transparency and explainability in AI algorithms, ensuring that the basis for AI-informed decisions can be understood and audited.

Mitigating potential biases in AI algorithms, ensuring that they do not lead to unintended disparities in health outcomes, will also be important. Because AI models can evolve over time, ongoing monitoring and adaptation of regulatory approaches is critical to ensuring the safety and effectiveness of AI-enabled products.

Over the past year, regulators have released several influential guidance documents addressing AI credibility, validation, and risk-based oversight. FDA’s January 2025 draft guidance proposes a framework in which the required level of scrutiny for an AI model depends on its context of use (COU) and the potential consequences of an incorrect decision, an approach known as risk-based credibility assessment. The EMA Network Data Steering Group (NDSG) has outlined six workstreams, including one on AI, that aim to leverage AI for improved data analytics and regulatory processes within the European Medicines Regulatory Network. The EU AI Act classifies AI systems by risk, with different requirements and obligations for each category, reflecting a broad regulatory approach to AI across EU member states.

Additionally, a joint FDA-EMA paper was published in early 2026. MHRA, PMDA, and ANVISA have also structured their work around AI and published several important guidance documents. Many of these developments will be covered during the Global Annual Meeting Townhalls as well as in AI-focused sessions throughout the event. Through forums such as the Global Annual Meeting and initiatives like the AI Consortium, DIA is helping shape a shared, global understanding of how trustworthy AI can be responsibly integrated into regulatory decision-making while preserving scientific rigor and human judgment.

Philadelphia, PA | June 14-18
The DIA Global Annual Meeting is where life sciences professionals from industry, regulatory agencies, patients, and academia come together to drive breakthroughs in healthcare — advancing science, policy, and patient outcomes worldwide.
Advance rate pricing available through March 12.
Tuesday, March 17, 2026 | 1–2pm EDT
This webinar introduces the DIA Artificial Intelligence Consortium, a neutral, public‑private partnership that convenes regulators, biopharmaceutical companies, academia, and technology providers. Speakers will walk through use cases across regulatory, clinical, and manufacturing contexts and highlight where human‑in‑the‑loop oversight, documentation expectations, and Good Machine Learning Practice–aligned validation should differ by context of use.

Sponsored content is written and provided to BioSpace by the advertiser. It is published with the advertiser’s approval without contribution from BioSpace’s editorial and insights teams.

Maria Vassileva
Chief Science and Regulatory Officer | DIA
Stephanie Rosner
Program Manager, Artificial Intelligence | DIA