Implementing The Optimal Clinical Data Delivery Infrastructure to Facilitate Data Access and Review

GUEST BLOG

David Weigand

Today’s influx of clinical data is as diverse as it is abundant, and scientific advancement hinges on understanding that wealth of information. At Bristol-Myers Squibb, driving innovative therapies for patients is at the heart of our work. That’s why real-time access to clinical trial data is so important. By improving the infrastructure that supports our data access and review process, BMS can bring detailed insights to key stakeholders at a speed that keeps pace with our global health initiatives. Here, I’ve outlined how we’re achieving this by implementing the elluminate® clinical data delivery infrastructure, which promotes greater data access and a more efficient review process.

The Current Architecture at BMS

As the footprint of non-CRF data grows exponentially, we have seen several key issues with our existing model of delivery. In this infrastructure, we use a batch uploader to move that data into the EDC platform. This is an intensive process, because we must build forms within the data capture system to hold the data. On top of creating those forms, the EDC system bears a significant load to store the data and migrate it during post-production changes.

There is also limited control over, and access to, the data itself. We currently use a custom extract solution to decide which forms are grouped into a super set that we call a Consolidated Clinical View (CCV). However, if there is a problem with that system, we are unable to find, understand and correct its root cause. User access management is cumbersome as well, since access and privileges are granted at the folder level rather than the study level. At the same time, because the data is segregated into study-specific directories, the way it is organized impedes cross-study review and analysis.
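To make that concrete, here is a minimal, hypothetical Python sketch of why folder-level permissions and study-segregated directories frustrate cross-study review. None of the paths, grant tables or helper names below come from our actual system; they exist only to illustrate the pattern.

```python
# Hypothetical sketch only: the paths, grants and helpers here are invented
# to illustrate the pattern, not taken from the BMS system. When permissions
# attach to folders and every study lives in its own directory, a reviewer
# needs one grant per folder, and a cross-study question means stitching
# directories together by hand.
from pathlib import Path

folder_grants = {
    # Access is managed folder by folder, not study by study.
    "reviewer1": {"/data/STUDY-A/labs", "/data/STUDY-A/vitals"},
    # reviewer1 holds no grant on /data/STUDY-B/labs, so reviewing labs
    # across both studies first requires another round of access requests.
}

def readable_lab_files(user: str) -> list[Path]:
    """Collect the lab extracts this user can actually open, folder by folder."""
    return [
        path
        for folder in folder_grants.get(user, set())
        if folder.endswith("/labs")
        for path in Path(folder).glob("*.csv")
    ]

# A "cross-study" review is then just manual concatenation of whatever
# folders the user happens to reach, with no shared structure across studies.
print(readable_lab_files("reviewer1"))
```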

Finally, there is a lack of data visibility across the entire architecture. With limited end-to-end versioning and traceability, we are unable to see how data was moved, who had access to it and when it was accessed. We aimed to remedy these issues by engaging a new partner to develop a more streamlined and robust solution.

The Optimal Clinical Data Delivery Infrastructure for Data Access and Review

While the majority of our studies currently run on the architecture described above, we have recently adopted a new solution that utilizes eClinical Solutions’ elluminate platform, including its clinical data workbench, mapping and analytics capabilities. For our initial adoption, we used five early studies that had no critical database lock or analysis milestones and contained less data than our larger Phase 3 trials. As we bring more studies onto the new architecture, we’ve found success in solving the issues of our previous model of clinical data delivery:

  • It is easier to handle the volume of non-CRF data, as we no longer have to load it into the EDC platform. Instead, the data is consumed automatically into elluminate through the importer module. As we identify new external data sources, it becomes quicker to onboard them into the new infrastructure. We can now access information more efficiently and push that data downstream for review and analysis.
  • Mapper and elluminate replace the custom extract tool that we used for CCVs. If there are issues, we can easily identify the cause and take the necessary steps to get data flowing again. We also have control over changes in how content is delivered: for example, if we need to create an additional derived variable to include as part of the CCV, we can now do that through the Mapper utility (the first sketch after this list illustrates the idea).
  • With Mapper pushing data into a standardized reporting structure, cross-study views are enabled automatically. There is no need to write custom SAS code to aggregate the data in a bespoke way; we can do that within elluminate itself.
  • The exporter utility also gives us visibility into where data has gone once it has left the elluminate system. If it has been pushed to a statistical computing environment, we know when that occurred, how often and to which directory locations it went (the second sketch after this list illustrates the idea).
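
Mapper itself is configured within the elluminate product, so the code below is not its actual interface. It is a minimal Python sketch, with invented study data and column names, of the ideas in the list above: mapping study-specific sources into one standardized structure and deriving a new variable along the way, so that cross-study views fall out of a simple query.

```python
# Hypothetical sketch only: elluminate's Mapper is configured in the product,
# not through this code. Invented data and names illustrate the concept of
# mapping study-specific sources into one standardized structure, deriving a
# new variable along the way, so cross-study views need no custom SAS code.
import pandas as pd

# Toy stand-ins for two studies' vital-signs extracts with differing layouts.
study_a = pd.DataFrame({"SUBJID": ["A-001", "A-002"],
                        "WT_KG": [82.0, 64.5], "HT_CM": [180, 165]})
study_b = pd.DataFrame({"subject": ["B-101"], "weight": [70.0], "height": [172]})

def map_to_ccv(df: pd.DataFrame, study_id: str, colmap: dict) -> pd.DataFrame:
    """Rename source columns to the standard model, tag the study, and add a
    derived variable (here BMI), the kind of derivation described above."""
    std = df.rename(columns=colmap)
    std["STUDYID"] = study_id
    std["BMI"] = std["WEIGHT"] / (std["HEIGHT"] / 100) ** 2  # derived variable
    return std[["STUDYID", "SUBJID", "WEIGHT", "HEIGHT", "BMI"]]

ccv = pd.concat([
    map_to_ccv(study_a, "STUDY-A", {"WT_KG": "WEIGHT", "HT_CM": "HEIGHT"}),
    map_to_ccv(study_b, "STUDY-B",
               {"subject": "SUBJID", "weight": "WEIGHT", "height": "HEIGHT"}),
], ignore_index=True)

# Every study lands in the same structure, so a cross-study view is one query.
print(ccv.groupby("STUDYID")["BMI"].mean())
```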

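The exporter’s traceability is likewise a property of the platform rather than anything we script ourselves, but conceptually it amounts to recording metadata with every push. Here is a hypothetical sketch of that idea, with an invented log format and function name:

```python
# Hypothetical sketch only: this is not elluminate's exporter API. It shows
# the traceability idea, where every push of a dataset out of the platform
# records who moved it, when, a content fingerprint and the destination.
import csv
import hashlib
import shutil
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("export_audit.csv")

def export_with_audit(src: Path, dest_dir: Path, user: str) -> None:
    """Copy a dataset to a destination directory and append an audit record."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(src, dest_dir / src.name)
    digest = hashlib.sha256(src.read_bytes()).hexdigest()  # version fingerprint
    is_new_log = not AUDIT_LOG.exists()
    with AUDIT_LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new_log:
            writer.writerow(["timestamp_utc", "user", "file", "sha256", "destination"])
        writer.writerow([datetime.now(timezone.utc).isoformat(), user,
                         src.name, digest, str(dest_dir)])

# e.g. export_with_audit(Path("ccv_labs.csv"), Path("sce/incoming"), "dweigand")
# The log then answers: when was this dataset pushed, how often, and where to?
```
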
Learn more about the vision, goals and lessons learned thus far from the BMS implementation of the elluminate® Clinical Data Platform in this webinar case study presented by David Weigand, Director of Clinical Data Reporting and Analytics at Bristol-Myers Squibb.

With this new infrastructure, we now have a single access point for clinical data, which allows us to disperse, review and consume that data as we need to. A key benefit of the new delivery system is how seamless it has been: elluminate integrates easily into our current architecture and promotes data visibility and data flow for streamlined, governed access. This ultimately leads to far deeper insights and helps BMS achieve its goal of transforming lives through science.

David Weigand

Director, Clinical Data Reporting and Analytics
Bristol-Myers Squibb

David Weigand brings over 15 years of clinical research experience in various clinical programming, data management and project management roles across multiple therapeutic areas. In his role as Director of Clinical Data Reporting and Analytics at Bristol-Myers Squibb, David provides operational leadership of the clinical data reporting and analytics function, enabling the presentation of clinical trial datasets, data reports, listings and visualizations to various stakeholder groups for data review, patient safety oversight, protocol deviation and data trend detection, internal decision making and statistical analysis. He holds a Bachelor of Science in Nursing from Penn State University.
