Finding AI’s Place in Clinical Trials

In recent years, artificial intelligence (AI) has slowly but surely made its way into the life sciences industry. But where does it fit in the highly regulated area of clinical trials? The industry has already adopted AI in areas such as patient recruitment and outcome prediction. Applied Clinical Trials caught up with Raj Indupuri, CEO and co-founder of eClinical Solutions, to discuss the role of AI in clinical trials, the algorithms behind it, and its future.

Applied Clinical Trials: Can you share with us your perspective on AI in pharma and life sciences today?

Raj Indupuri: While we have been on the path of automation and advanced analytics over the last decade, which has helped us build and scale our maturity with AI, a massive amount of innovation on complex, critical, and life-changing use cases still has to happen before we reach our full potential. AI offers game-changing opportunities to augment intelligence and elevate human knowledge, transforming every aspect of the life science and pharma value chain and bringing us closer to the future of health: from drug discovery to clinical trials, operations, commercial, and pharmacovigilance, literally from molecule to market, which is quite remarkable! Within clinical development today, there is increased complexity with digital trials, coupled with pressure to reduce cycle times and increase efficiency. Some powerful use cases and applications include automated digital data flow from protocol to submission, automated clinical trial design, molecule research for drug discovery, personalized customer and patient engagement, and automated evidence-based safety intelligence. The list is endless, and there’s much work to be done. AI-enabled applications are already helping companies thrive, and they are widening the gap between companies that embrace these technologies and those waiting for “the future.”

ACT: What challenges are we currently seeing with AI in the clinical trials space?

Indupuri: For companies to embrace AI and become AI-fueled organizations, there are several challenges to overcome. A few critical ones to highlight are solving the right problems with AI, driving adoption, and managing change by developing human-centered and trustworthy AI. The potential benefits are many, as we discussed, but realizing them requires:

  • Building trust in the models: This requires collaboration with stakeholders on their role as ‘human-in-the-loop’ and insight into the inner workings of the AI models and how they arrive at their predictions. If end users and stakeholders don’t trust these models, it will be difficult to put them into production.
  • Proper change management: Upskilling people to use these digital and AI-augmented capabilities and ensuring that business processes are updated to integrate these capabilities into day-to-day operations and decision-making.
  • Knowledge and explainability of the decision-making process: This is imperative for building the trust that drives adoption. It helps end users understand how and why a model arrives at a given prediction or recommendation, validate results, uncover biases or errors, interpret outputs correctly, and use the predictions to make informed decisions.

ACT: Why is explainability crucial for successfully adopting and implementing machine learning?

Indupuri: When AI models are black boxes, it becomes challenging for stakeholders to understand how the models arrived at their predictions or recommendations. AI interpretability provides insights into the internal workings of AI models, such as the underlying factors, features, or patterns that contribute to the predictions, and explains the decision-making processes and outputs of these models. In the life science industry, where AI is increasingly used to make critical decisions and predictions with profound implications, and where lives can be at stake in areas such as disease diagnosis, treatment planning, or drug development, interpretability becomes vital in ensuring transparency, trustworthiness, and regulatory compliance.

Understanding the decision-making process helps scientists, clinicians, and regulators objectively assess how AI models arrive at their conclusions, thereby enabling them to gain confidence, validate the results, identify potential biases or errors, and make informed decisions based on the AI’s predictions.
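
As a minimal illustration of what this kind of inspectability can look like in practice, the Python sketch below trains a simple, transparent classifier on synthetic data and reports which input factors drive its predictions. The feature names, data, and model choice are hypothetical assumptions for illustration only, not drawn from any specific clinical system discussed here.

```python
# Hedged interpretability sketch: hypothetical features and synthetic data,
# not a description of any particular clinical AI system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Hypothetical patient-level features for a binary outcome model.
feature_names = ["age", "baseline_biomarker", "prior_therapies", "site_enrollment_rate"]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic outcome that depends mostly on the first two features.
y = (0.9 * X[:, 0] + 1.2 * X[:, 1] + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

# Coefficients of a linear model can be inspected directly...
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:>22}: coefficient = {coef:+.2f}")

# ...and permutation importance gives a model-agnostic view of which
# features the predictions actually rely on.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name:>22}: permutation importance = {imp:.3f}")
```

Coefficients give a direct, model-specific view, while permutation importance offers a model-agnostic check that the reported drivers are the ones the predictions actually depend on.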

The life science industry often faces ethical challenges in deploying AI systems, such as privacy, fairness, and accountability issues. Interpretability can help address these concerns by revealing potential biases or unfair practices embedded in the AI models. It allows for identifying and rectifying discriminatory factors, ensuring fairness in decision-making, and reducing the risk of unintended consequences.

Overall, AI interpretability is of paramount importance in the life science industry. By making AI models more transparent and accountable, interpretability contributes to AI’s responsible and reliable application in life science research, healthcare, and drug development.

ACT: How can drug developers recognize when an AI algorithm is underperforming, and how does governance play a role in this?

Indupuri: Understanding and improving AI model performance go hand-in-hand. In many companies, teams devoted to testing and verifying AI model performance are tasked with reviewing predictions and labeling whether those predictions were accurate. This ground-truth labeling keeps the ‘human-in-the-loop’ and is critical to measuring and monitoring performance, creating a feedback loop that allows models to learn and adapt continuously.
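
As a hedged sketch of that feedback loop (the class, function, labels, and threshold below are illustrative assumptions, not a description of any particular vendor workflow), the following Python snippet compares model predictions against reviewer-assigned ground-truth labels and flags the model when accuracy drops below a chosen threshold:

```python
# Hypothetical human-in-the-loop monitoring sketch: reviewers label each
# model prediction as accurate or not, and the running accuracy decides
# whether the model should be flagged for retraining.
from dataclasses import dataclass

@dataclass
class ReviewedPrediction:
    prediction: str       # what the model output
    reviewer_label: str   # ground truth assigned by a human reviewer

def accuracy_below_threshold(reviews, threshold=0.9):
    """Return True when accuracy over human-reviewed predictions drops below the threshold."""
    if not reviews:
        return False
    correct = sum(r.prediction == r.reviewer_label for r in reviews)
    accuracy = correct / len(reviews)
    print(f"reviewed: {len(reviews)}, accuracy: {accuracy:.1%}")
    return accuracy < threshold

# Example: three reviewed predictions, one of which the reviewer rejected.
batch = [
    ReviewedPrediction("adverse_event", "adverse_event"),
    ReviewedPrediction("no_event", "no_event"),
    ReviewedPrediction("no_event", "adverse_event"),  # reviewer disagreed with the model
]
if accuracy_below_threshold(batch):
    print("Accuracy below threshold: flag the model for review and retraining.")
```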

Also, governance is key to success and to higher-performing models. How do you manage all these datasets, and who can access what? How do you ensure and communicate interpretability and explainability so that these models provide the intended results? What is the governance around testing and verifying that the outputs of these models are high-performing? You need to answer these questions and establish processes to deliver high-performing models.
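
To make the data-access part of that question concrete, here is a deliberately simplified sketch of dataset-level permissions. The role names, dataset names, and permission map are invented for illustration; real governance involves far more than a lookup table.

```python
# Hypothetical sketch of dataset-level access governance: a simple
# role-to-dataset permission map checked before anyone touches the data.
DATASET_PERMISSIONS = {
    "trial_operational_metrics": {"data_manager", "biostatistician"},
    "patient_level_labs": {"medical_monitor", "biostatistician"},
}

def can_access(role: str, dataset: str) -> bool:
    """Return True if the given role is allowed to read the dataset."""
    return role in DATASET_PERMISSIONS.get(dataset, set())

assert can_access("biostatistician", "patient_level_labs")
assert not can_access("data_manager", "patient_level_labs")
```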

ACT: Can you share your perspectives on the future of AI/ML in clinical trials and drug development?

Indupuri: Going back to my earlier comment, we are only scratching the surface of the use cases we need to solve to become AI-fueled companies; as an industry, we still have some way to go to transform data into insights that enable tangible outcomes and to modernize and scale infrastructure for the future of health. Given the unprecedented changes this industry is seeing, it would be fair to say that predicting what clinical trials will look like in the next 25, 10, or even five years can only be speculative. From where we stand today, AI is the answer to bringing disruptive innovations to the life science industry, but the future is all about data. We have made considerable strides toward personalized medicine, and AI can get us closer to realizing that vision. The point to walk away with is that data is the future of health, and AI will deliver the outcomes.

