ABOUT THE EVENT
Date: Tuesday, March 17, 2026 1–2pm EDT
Event Location: Virtual
Artificial intelligence is rapidly reshaping the life sciences, from automating repetitive tasks to enabling advanced scientific and safety assessments across drugs, biologics, and devices. As adoption grows across regulators, pharmaceutical companies, technology developers, nonprofits, and collaborative consortia, the need for clear, risk-based frameworks for validation, governance, and human oversight becomes increasingly critical.
This webinar introduces the DIA Artificial Intelligence Consortium, a neutral, public‑private partnership that convenes regulators, biopharmaceutical companies, academia, and technology providers, including FDA, Health Canada, MHRA, PMDA, IQVIA, Gilead, Otsuka, BeOne Medicines, Beth Israel Deaconess Medical Center–Yale School of Medicine, and others. Consortium partners are building a 7‑step AI use case classification framework, risk‑proportionate validation and monitoring approaches, and aligned regulatory terminology. Together, these efforts reflect how global authorities are stratifying AI use, from low‑risk administrative automation to decision‑critical analyses that influence clinical or labeling decisions.
Speakers will walk through use cases across regulatory, clinical, and manufacturing contexts and highlight how human‑in‑the‑loop oversight, documentation expectations, and Good Machine Learning Practice–aligned validation should differ by context of use.
Featured Speakers