While quantum computing has been reported to be five years away for many years now, companies are preparing for it by laying foundations with AI in development today.
Quantum computing is rapidly gaining traction in the life sciences, with growing attention from global institutions and increasing experimentation in R&D. The United Nations proclaimed 2025 the International Year of Quantum Science and Technology as a public awareness campaign across sectors, including healthcare. Most recent quantum applications have revolved around drug discovery, from quantum models designing cancer drugs to building better Alzheimer’s disease models. In many of these efforts, quantum is being integrated with existing artificial intelligence (AI) models to enhance modeling power and insights.
It’s a space of excitement and promise, but also trepidation. Sponsors have never been under more pressure to move trials along faster and maximize budgets. Current AI models have already proved valuable as tools for optimizing trial efficiency, a strength that quantum developers hope to build on. Yet bringing quantum into the clinical trials arena also raises concerns about data privacy and cybersecurity.
The pressure is on industry to adapt to the breakneck pace of AI evolution and prepare for quantum. “Six months back, we didn’t have a reasoning model. Now, we have a reasoning model,” said Sankarasubbu of Saama.
Quantum Potential, AI Reality
The buzz surrounding quantum computing promises much: essentially a sweeping, end-to-end enhancement of every step in the drug design, development, and clinical trials process.
But that future depends on foundations being built today. AI may not yet simulate quantum-scale complexity, but it’s already delivering tangible, scalable improvements across clinical trial workflows. And unlike quantum, which is still largely exploratory, AI is ready to implement now.
AI-driven models are helping sponsors streamline patient recruitment, data collection, and real-time monitoring, which shortens trial timelines overall. For example, sponsors often rely on diagnosis codes to select patients, but those codes are primarily designed for revenue cycle management, not eligibility screening. The real wealth of data typically lies in the doctor’s notes, which AI can mine to uncover eligibility signals that structured codes often miss.
A trial might need a stage three to four cancer patient with three weeks of disease progression—details “typically not captured in revenue cycle management software, which is what’s driving most real-world data,” said Sankarasubbu. Incorporating this approach can help streamline recruitment and reduce startup delays.
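To make the idea concrete, here is a minimal sketch of note-based eligibility screening. It uses simple pattern matching as a stand-in for the NLP models described above; the note text, criteria, and thresholds are all hypothetical.

```python
import re

# Hypothetical clinical note; real notes would come from an EHR system.
note = (
    "Pt presents with stage IV NSCLC. Imaging shows disease "
    "progression over the past 3 weeks despite first-line therapy."
)

def screen_note(text: str) -> dict:
    """Flag eligibility signals that diagnosis codes typically miss."""
    stage = re.search(r"stage\s+(III|IV|3|4)\b", text, re.IGNORECASE)
    progression = re.search(
        r"progression over the past\s+(\d+)\s+weeks?", text, re.IGNORECASE
    )
    return {
        "late_stage": bool(stage),
        "weeks_of_progression": int(progression.group(1)) if progression else None,
    }

signals = screen_note(note)
# Candidate matches the hypothetical protocol: stage 3-4, >= 3 weeks progression.
eligible = signals["late_stage"] and (signals["weeks_of_progression"] or 0) >= 3
print(signals, "eligible:", eligible)
```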
AI can also standardize and harmonize trial data across studies with widely varying structures, endpoints, and patient populations. This fragmentation complicates preparations for internal review and regulatory submission. “By automating data harmonization, AI accelerates these processes and improves consistency,” he added.
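A minimal sketch of what such harmonization can look like, assuming two hypothetical studies that report the same measurement under different field names and units; the mappings shown are illustrative, not Saama’s pipeline.

```python
# Map per-study field names and units onto one common schema.
STUDY_MAPPINGS = {
    "study_a": {"subj": "subject_id", "wt_kg": "weight_kg"},
    "study_b": {"patient": "subject_id", "weight_lb": "weight_kg"},
}

# Source fields that need unit conversion before landing in the schema.
UNIT_CONVERSIONS = {"weight_lb": lambda v: round(v * 0.453592, 1)}

def harmonize(study: str, record: dict) -> dict:
    """Rename fields to the common schema and convert units as needed."""
    out = {}
    for src, dst in STUDY_MAPPINGS[study].items():
        value = record[src]
        if src in UNIT_CONVERSIONS:
            value = UNIT_CONVERSIONS[src](value)
        out[dst] = value
    return out

print(harmonize("study_a", {"subj": "A-001", "wt_kg": 70.0}))
print(harmonize("study_b", {"patient": "B-014", "weight_lb": 154.0}))
```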
AI-powered dashboards further support compliance and safety by flagging serious adverse events, enabling physician-led interrogation of live trial data, and surfacing critical issues through intelligent visualization.
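As an illustration, a dashboard’s flagging logic might reduce to a rule like the one sketched below, assuming each event carries a seriousness flag and a CTCAE-style severity grade; the records and threshold are hypothetical.

```python
# Hypothetical adverse-event records a dashboard might monitor in real time.
events = [
    {"subject_id": "A-001", "term": "nausea", "grade": 1, "serious": False},
    {"subject_id": "B-014", "term": "sepsis", "grade": 4, "serious": True},
]

def flag_serious(events: list[dict]) -> list[dict]:
    """Surface events marked serious or with CTCAE grade >= 3."""
    return [e for e in events if e["serious"] or e["grade"] >= 3]

for event in flag_serious(events):
    print(f"ALERT: {event['subject_id']} reported {event['term']} "
          f"(grade {event['grade']})")
```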
Together, these applications contribute to the core value proposition of AI in trials: shortening timelines. “[Sponsors] need to compress clinical trials in multiple small areas to get that timeline compression, which will result in a cost compression as well,” said Sankarasubbu.
Faster trials help sponsors bring products to market sooner, recoup investments more quickly, and maximize revenue during exclusivity periods. Patients, in turn, benefit from earlier access to potentially life-changing therapies, especially in areas of high unmet medical need.
Sponsor Best Practices for Responsible AI Use
As AI becomes more integrated into clinical research, regulators are racing to keep pace and have raised concerns around model credibility and explainability. Sponsors need to demonstrate not just what the model does but also why it does it and how those decisions are made, said Sankarasubbu. He recommends that sponsors prioritize model validation, transparency, and ethical workflows like human-in-the-loop.
Model validation should begin early and be clearly documented, especially when using generative models or open-source APIs. Sponsors relying on a single model output take on risk as models continually evolve. At Saama, Sankarasubbu employs a “decision by jury” strategy when validating models.
“I’m a big fan of Law and Order. The inspiration comes from how the jury system works in the courts—12 people have to agree to make a decision,” he said. “Similar thing here. We make models compete against each other to come to an agreement. No agreement means nothing.”
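Read as an algorithm, the jury idea is a consensus gate over independently produced model outputs. The sketch below is one plausible reading of it, not Saama’s implementation; the quorum parameter and the answer strings are assumptions.

```python
from collections import Counter

def jury_decision(answers: list[str], quorum: float = 1.0) -> str | None:
    """Return the jury's verdict only if enough models agree; else abstain.

    quorum=1.0 mimics a unanimous jury; lower it for majority voting.
    """
    verdict, votes = Counter(answers).most_common(1)[0]
    return verdict if votes / len(answers) >= quorum else None

# Hypothetical outputs from independently prompted models.
print(jury_decision(["eligible", "eligible", "eligible"]))  # 'eligible'
print(jury_decision(["a", "b", "a"]))  # None: "no agreement means nothing"
```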
Sponsors must then go a step further and give regulators insight into the entire model testing and implementation process. Transparency builds trust: sponsors should proactively show how models are trained, what data they’re built on, and how outputs are reviewed.
Importantly, AI is meant to augment, not replace, said Sankarasubbu. Every design should have a human approving the decisions that a model generates. “We always treat it like a recommendation system from Netflix,” he said. “Netflix can recommend a movie, but at the end of the day, it’s you deciding whether to watch that movie or not.”
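In code, that recommendation-only posture might look like the hypothetical gate below, where nothing is applied until a human reviewer signs off; the functions and record fields are invented for illustration.

```python
# A minimal human-in-the-loop gate: the model only recommends; a reviewer
# must approve before anything is acted on. All names are hypothetical.

def model_recommend(record: dict) -> str:
    """Stand-in for any model output, e.g. a cohort-inclusion suggestion."""
    return f"include {record['subject_id']} in cohort"

def apply_with_approval(record: dict, reviewer_approves) -> str:
    recommendation = model_recommend(record)
    if reviewer_approves(recommendation):
        return f"APPLIED: {recommendation}"
    return f"LOGGED ONLY: {recommendation} (rejected by reviewer)"

# In production the callback would route to a physician's review queue;
# here we simulate an approval.
print(apply_with_approval({"subject_id": "A-001"}, lambda rec: True))
```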
Managing Hallucinations, Bias, and Patient Privacy
AI is powerful and can improve existing workflows in clinical trials—but it isn’t infallible. These models sometimes hallucinate, producing incorrect information and presenting it confidently as factual.
In clinical contexts, hallucinations can pose real risks if models are not properly constrained. “Sponsors can manage this by grounding models in clinical-specific data, using smaller, fine-tuned models trained on structured sources like ClinicalTrials.gov and historical reports,” said Sankarasubbu. His team published a now widely used paper comparing how often open-source models hallucinate, establishing a benchmark for industry use.
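A minimal sketch of that grounding pattern follows, assuming a tiny curated corpus and naive keyword retrieval in place of a production embedding search; the corpus entry, scoring, and abstention message are hypothetical.

```python
# Restrict the model to answer only from a curated corpus
# (e.g., ClinicalTrials.gov records), and abstain otherwise.
CORPUS = {
    "NCT00000000": "Phase 3 study of drug X in stage IV NSCLC, 400 subjects.",
}

def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval; production systems use embeddings."""
    words = set(question.lower().split())
    return [doc for doc in CORPUS.values()
            if words & set(doc.lower().split())]

def ask(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Abstaining beats hallucinating when no grounding exists.
        return "No supporting source found; escalate to a human."
    # A real system would pass the context to a fine-tuned model here.
    return f"Answer grounded in: {context[0]}"

print(ask("How many subjects in the stage IV NSCLC study?"))
```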
Data bias can also present problems in AI models. Bias often arises not from the model but from the data it’s trained on. “It’s not that algorithms by nature are biased. It’s the data we feed in and how these algorithms interpret that,” he said. “You need to have control and understand the history of how the data was collected.”
In patient recruitment, for example, bias can lead to downstream problems like a lack of efficacy in underrepresented groups. Having a firm understanding of how training data was sourced, what populations are represented, and what gaps might exist enables proactive bias mitigation.
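One concrete way to surface such gaps is a representation audit that compares subgroup shares in the training data against a reference population, as in the sketch below; all counts, group names, and thresholds are made up for illustration.

```python
# Compare subgroup shares in training data against a reference population.
training_counts = {"group_a": 800, "group_b": 150, "group_c": 50}
reference_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

total = sum(training_counts.values())
for group, count in training_counts.items():
    observed = count / total
    gap = observed - reference_share[group]
    # Flag groups more than 5 percentage points below their reference share.
    flag = "UNDERREPRESENTED" if gap < -0.05 else "ok"
    print(f"{group}: {observed:.0%} vs {reference_share[group]:.0%} -> {flag}")
```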
For AI to do its job, it must handle large amounts of sensitive patient data from clinical trials. Sankarasubbu emphasizes applying established medical privacy principles to protect patients. Sponsors should de-identify patient data before model training or deployment and limit access to sensitive data throughout the AI development lifecycle, not just to comply with HIPAA and GDPR, but to reduce exposure to cybersecurity threats.
“That should be done right from the beginning, right from model training or deployment,” he said. “People who don’t need access to data should not have access to that particular data in production.”
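A minimal sketch of de-identification applied before training, assuming pseudonymization of direct identifiers plus regex redaction of free text; real pipelines follow the HIPAA Safe Harbor identifier list or expert determination, and every field and pattern here is illustrative.

```python
import hashlib
import re

def pseudonymize(subject_id: str, salt: str = "per-study-secret") -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hashlib.sha256((salt + subject_id).encode()).hexdigest()[:12]

def redact_note(text: str) -> str:
    """Strip obvious identifiers (dates, phone numbers) from free text."""
    text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", text)
    text = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]", text)
    return text

record = {"subject_id": "A-001",
          "note": "Seen on 01/15/2025, callback 555-867-5309."}
record["subject_id"] = pseudonymize(record["subject_id"])
record["note"] = redact_note(record["note"])
print(record)
```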
Sponsors can further reduce exposure by collecting only essential data and minimizing how much sensitive information the model handles. Some may also consider a federated learning approach, where models learn from decentralized data sources without that data ever leaving its original location.
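To show the shape of that idea, here is a toy federated-averaging round in which each site fits a simple local statistic and shares only the resulting parameter, never the raw records; the sites, values, and weighting are hypothetical.

```python
# Raw data never leaves its site; only local parameters are shared.
sites = {
    "site_1": [4.0, 5.0, 6.0],   # stays at site_1
    "site_2": [10.0, 12.0],      # stays at site_2
}

# Each site computes a local update (here, a mean) and its sample count.
local_updates = {name: (sum(v) / len(v), len(v)) for name, v in sites.items()}

# The coordinator aggregates parameters weighted by sample count.
total_n = sum(n for _, n in local_updates.values())
global_mean = sum(mean * n for mean, n in local_updates.values()) / total_n
print(f"global model parameter: {global_mean:.2f}")  # 7.40
```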
Looking Ahead to Quantum and Responsible Innovation
Sankarasubbu predicts that quantum will come to the forefront in roughly the next five years. But the evolution of this technology doesn’t occur in isolation.
“When these things are evolving, your other aspects also evolve quite a bit,” he said. “It’s not like evolution happens at only one place and does not happen at the other.”
To be “quantum ready,” sponsors should focus now on building strong AI governance and data protection practices and continue evolving alongside the technology itself. That means establishing structured workflows for models across the clinical trial lifecycle and following best practices for responsible AI use. It also means ensuring transparency: documenting how model outputs are generated and reviewed, and proactively communicating that internally and with regulators to build trust.
Ultimately, those who will benefit most from AI and quantum won’t be the ones chasing novelty. They’ll be the ones investing in structure, oversight, and long-term value.
This article was written in partnership with Saama.