Delivering Data-Driven Value

The Final Frontier of LOTF: Digitalizing In Vivo Research

The digitalisation of in vivo research has traditionally lagged behind other areas of the drug discovery pipeline, preventing research institutions from realising their vision for a Lab of the Future. It’s estimated that pre-human research accounts for over 40% of the R&D costs per approved new compound, and in vivo research constitutes a large portion of this expenditure. Pharmaceutical and biotechnology companies can no longer wait to realise the benefits of digitalising such a vital piece of their portfolio.

In this talk, we will discuss the state of digitalisation in in vivo research, explain why this phase of drug discovery is called the “final frontier”, and examine actual requests from pharmaceutical and biotech companies tasked with bringing new software into their existing technology stacks to meet modernisation initiatives.

Lastly, we will present a use case with a large contract research organisation tasked with replacing homegrown, purpose-built software used in tandem with legacy spreadsheets. We will examine the challenges and wins they experienced while implementing a digitalisation initiative, including metrics and the impact on R&D costs.

Speakers:
  • Julie Morrison, President, Rockstep Solutions
  • Amy Huff, Director of Operations at Charles River Laboratories
  • Jason M. Davis, Product Owner Director at Charles River Laboratories

DataFAIRy to drive AI adoption

The Pistoia Alliance has launched the second phase of its DataFAIRy: Bioassay project, which aims to convert bioassay data into machine-readable formats that adhere to the FAIR guiding principles of Findable, Accessible, Interoperable and Reusable.

Enterprise Named Entity Recognition (NER) and Linking with Kazu

In this webinar, we will teach the inner workings of Kazu and show how you can use and configure it for your own use cases; a brief illustrative sketch of NER and entity linking follows the topic list below. Topics that will be covered include:

  • The Kazu data model
  • Out-of-the-box ontologies
  • Parsing your own ontology or knowledge base
  • Kazu NER features
  • Kazu Entity Linking features
  • Managing curations
  • Scaling Kazu with Ray
  • The Pistoia Alliance NLP Use Case Database Project
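
To ground these topics, below is a minimal, self-contained sketch of what dictionary-based NER and entity linking look like in general. It deliberately does not use Kazu's own API, and the ontology entries, identifiers, and text are illustrative only.

```python
# Minimal illustration of NER + entity linking (not Kazu's API).
# A tiny "ontology" maps surface forms to canonical identifiers;
# a naive matcher finds spans in text and links them to those IDs.
from dataclasses import dataclass

# Illustrative ontology fragment: surface form -> (entity class, example ID).
ONTOLOGY = {
    "EGFR": ("gene", "ENSG00000146648"),
    "lung cancer": ("disease", "MONDO:0008903"),
    "gefitinib": ("drug", "CHEMBL939"),
}

@dataclass
class Entity:
    match: str          # text span as it appears in the document
    start: int          # character offset where the span begins
    end: int            # character offset where the span ends
    entity_class: str   # e.g. gene, disease, drug
    mapping: str        # canonical ontology identifier

def ner_and_link(text: str) -> list[Entity]:
    """Naive dictionary NER: first occurrence of each known surface form,
    linked to its ontology identifier."""
    entities = []
    lowered = text.lower()
    for surface, (cls, ident) in ONTOLOGY.items():
        idx = lowered.find(surface.lower())
        if idx != -1:
            entities.append(
                Entity(text[idx:idx + len(surface)], idx, idx + len(surface), cls, ident)
            )
    return sorted(entities, key=lambda e: e.start)

if __name__ == "__main__":
    doc = "EGFR mutations in lung cancer are often treated with gefitinib."
    for ent in ner_and_link(doc):
        print(f"{ent.match!r} [{ent.entity_class}] -> {ent.mapping}")
```

A production system such as Kazu replaces the toy dictionary with full ontologies, transformer-based NER models, and curation management, but the input/output shape (text in, linked entity spans out) is the same.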

IDMP Ontology Community of Interest Meeting – July 2023

A well-defined ontology is required to bridge regional and functional perspectives on common substance-related data objects with global, scientifically objective representations. The goal of our project is to build an IDMP Ontology that enables deep semantic interoperability based on FAIR principles, enhancing and augmenting the existing ISO IDMP standards.

Swiss Personalized Health Network – From Clinical (Routine) Data to FAIR Research Data

The Swiss Personalized Health Network (SPHN) has created a national framework for standardizing the semantic representation of health data, in alignment with the FAIR principles.

This framework is implemented in all Swiss university hospitals and utilizes a universal exchange language built upon international standard vocabularies, creating atomic building blocks of knowledge that can be applied in various contexts.

The syntax linking these semantic building blocks is conveyed through RDF, allowing for the representation of both terminologies and data as linked data. This enables researchers to easily combine subsets of data from different sources and access clinical knowledge within their research.
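
As a rough illustration of this pattern, and not the actual SPHN schema (the namespaces, classes, and properties below are invented placeholders), the following sketch uses rdflib to express one clinical observation as RDF and query it with SPARQL.

```python
# Sketch of representing clinical data as linked data and querying it.
# Namespace, class, and property names are illustrative placeholders,
# not the real SPHN vocabulary.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("https://example.org/hospital-a/")
VOC = Namespace("https://example.org/vocab#")  # placeholder "exchange language"

g = Graph()
g.bind("voc", VOC)

# One atomic building block of knowledge: a body-weight measurement.
obs = EX["observation/42"]
g.add((obs, RDF.type, VOC.BodyWeightMeasurement))
g.add((obs, VOC.hasSubject, EX["patient/7"]))
g.add((obs, VOC.hasValue, Literal("72.5", datatype=XSD.decimal)))
g.add((obs, VOC.hasUnit, Literal("kilogram")))

# Because the data are plain RDF, subsets from different sources can be
# merged (g += other_graph) and queried uniformly with SPARQL.
query = """
PREFIX voc: <https://example.org/vocab#>
SELECT ?patient ?value WHERE {
    ?obs a voc:BodyWeightMeasurement ;
         voc:hasSubject ?patient ;
         voc:hasValue ?value .
}
"""
for patient, value in g.query(query):
    print(patient, value)
```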

Making FAIR at Source Actionable in Pharma Research

Since their publication in 2016, the FAIR principles have become synonymous with good data management practices. The value of becoming FAIR-compliant in pharma has been demonstrated time and again: accelerated innovation, reduced lead time to discovery, elimination of data silos, improved efficiency, and the ability to perform advanced analytics.

Despite the potential benefits, most FAIRification efforts have succeeded only on a smaller scale, for specific value cases. Large-scale implementation of FAIR is often hampered by legal, technical, financial or organizational challenges. Creating a FAIR ecosystem requires data-capture infrastructure that not only enables but ensures that data is FAIR right at the point of creation (FAIR@source).

The success of this ecosystem is predicated on an organizational culture in which data is treated as an asset. A few years ago, we at Novo Nordisk made a commitment to making data generated in Research FAIR.

We have learned many lessons along the way, the most important of which is the need to make FAIR an achievable aspiration. To that end, we have created a FAIR maturity framework, customized to Novo Nordisk culture, that breaks the FAIR principles down into levels of maturity that individual teams can aim to achieve within a specific period of time. This approach ensures that change management happens in iterative, actionable chunks with clear wins.
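
Purely as a hypothetical sketch of what such a framework might look like in machine-readable form, the snippet below breaks each principle into ordered criteria; the levels and criteria are invented for illustration and are not Novo Nordisk's actual framework.

```python
# Hypothetical encoding of a FAIR maturity framework: each principle is
# broken into ordered maturity levels that a team can target iteratively.
# Levels and criteria are invented for illustration only.
MATURITY_FRAMEWORK = {
    "Findable": [
        "datasets have persistent identifiers",
        "datasets are registered in a searchable catalogue",
        "rich, standardized metadata are attached at creation",
    ],
    "Accessible": [
        "data retrievable by identifier via a documented protocol",
        "access conditions and authentication are machine-actionable",
    ],
    "Interoperable": [
        "controlled vocabularies used for key fields",
        "data expressed in shared ontologies and exchange formats",
    ],
    "Reusable": [
        "licence and provenance recorded",
        "metadata meet agreed community standards",
    ],
}

def maturity_level(principle: str, criteria_met: int) -> str:
    """Report the level a team has reached for one principle."""
    levels = MATURITY_FRAMEWORK[principle]
    reached = min(criteria_met, len(levels))
    return f"{principle}: level {reached} of {len(levels)}"

print(maturity_level("Findable", 2))  # e.g. a team partway along the scale
```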

In this session, we will share these learnings and the general framework we created, and we welcome input from others who have been on this journey and can help us refine it.

BMS Enabled FAIR Practices: R&D Information Landscape

Bristol Myers Squibb is proud to share a proposal to standardize our data and information models. The artifacts introduced today serve as foundational knowledge of Research and Development and attempt to capture factual and tacit knowledge in this space. The intention is to collaborate amongst peers to build a standard R&D industry information model.

We will share our first draft of the information models and sub-models, the key entities in this space, and their relationships. The models are intended to be shared in the public interest so that others can use them to drive the FAIR agenda and help us build knowledge products that are interoperable and reusable.
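
To give a flavour of what "key entities and their relationships" can mean in practice, here is a deliberately simplified, hypothetical sketch; the entities and relationships are illustrative and are not drawn from the BMS draft models.

```python
# Hypothetical, simplified R&D information model: a few key entities and
# the relationships between them. Not the actual BMS draft models.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Target:
    target_id: str
    name: str

@dataclass
class Compound:
    compound_id: str
    name: str
    intended_targets: list[Target] = field(default_factory=list)

@dataclass
class Assay:
    assay_id: str
    description: str
    measures: Optional[Target] = None     # which target the assay interrogates
    tested_compounds: list[Compound] = field(default_factory=list)

# Example: one assay linking a compound to its target.
egfr = Target("T-001", "EGFR")
cmpd = Compound("C-123", "Example inhibitor", intended_targets=[egfr])
assay = Assay("A-9", "Kinase inhibition assay", measures=egfr, tested_compounds=[cmpd])
print(assay.assay_id, "->", assay.measures.name)
```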

Speakers:

  • Umesh Bhatt: Director, Digital Semantic Engineering, Bristol Myers Squibb
  • Murali Kala: Digital Information Strategist, TargetArc, Inc.

Bioassays Have An Integration Problem: Collaboration Will Be Key To Making Them FAIR

Whilst life science companies have come to recognize data as their greatest asset, it is also their greatest challenge. The answers to the biggest questions facing the industry today could already be held within the countless proprietary experiment notes, published literature, and patient records produced in previously conducted experiments. The data landscape is continuously growing in complexity and scale as organizations generate more research, but much of it is siloed in different formats and locations. This makes it difficult to discover, query, and share—rendering data essentially unusable.  

Bioassay protocols are one such example where legacy data management systems are holding R&D back, and where adopting the FAIR (Findable, Accessible, Interoperable, Reusable) principles would improve the usability of the data. Bioassay protocols constitute the essential metadata for most of the experimental results collected in the process of drug discovery. While assay protocols are widely accessible—often stored in public data banks—they are universally kept in plain-text formats. This means they are not machine-readable and therefore require manual review, which demands a considerable time investment from highly qualified professionals. Scientists must spend significant amounts of time sifting through vast libraries of old records; there are currently more than 1.4 million unformatted bioassays. Pistoia Alliance research found that some researchers may spend up to twelve weeks per assay selecting and planning new experiments.
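
To illustrate the gap, the sketch below contrasts a plain-text protocol fragment with a machine-readable annotation of the same information; the field names and term identifiers are invented placeholders, loosely in the spirit of ontology-based bioassay annotation rather than any specific standard.

```python
# A plain-text protocol fragment (how bioassays are typically stored today)
# versus a machine-readable annotation of the same information.
# Field names and term IDs are illustrative placeholders only.
plain_text_protocol = (
    "Cells were incubated with test compound for 24 h, after which "
    "luciferase activity was measured to assess inhibition of the target."
)

structured_protocol = {
    "assay_format": {"label": "cell-based", "term_id": "EX:0000001"},
    "detection_method": {"label": "luciferase reporter", "term_id": "EX:0000002"},
    "incubation_time": {"value": 24, "unit": "hour"},
    "readout": {"label": "target inhibition", "term_id": "EX:0000003"},
}

# The structured form can be queried, filtered, and aggregated by machines,
# e.g. "find all cell-based assays with a luciferase readout":
if (structured_protocol["assay_format"]["label"] == "cell-based"
        and structured_protocol["detection_method"]["label"] == "luciferase reporter"):
    print("Assay matches the query.")
```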

Quantum Computing: What are the Pharma Use Cases?

The Pistoia Alliance, in collaboration with QED-C, QuPharm, and QPARC, has been hosting a monthly Community of Interest meeting that brings together quantum computing (QC) experts from the pharma and QC industries to explore the potential impact of this new technology on the pharmaceutical industry. Despite much ‘hype’ around how quantum computers may revolutionize drug discovery, the precise pharma use cases in which a quantum advantage will someday be demonstrable relative to classical computing have remained something of a mystery to the wider pharma/life science community. The goal of this event is to bridge that gap between quantum experts and the rest of the pharma/life science industry, with presentations from many of the top pharma quantum computing practitioners, who will explain the work they have been doing, which types of problems are amenable to QC, conclusions based on existing technology, and the anticipated benefits of using QC as the technology advances.

Applying the FAIR Data Principles to Paediatric Clinical Trials: Lessons Learned from Conect4Children

This webinar explores why the FAIR data principles are so important for data collected during rare and paediatric clinical trials. We introduce the conect4children (c4c) project and present the network that has been established under the IMI2 grant. c4c has a number of interesting tasks around the harmonisation of clinical trial data, including the development of a paediatric data dictionary, a Paediatric User Guide (developed in collaboration with CDISC) and activities exploring the connectivity between real-world data and paediatric clinical trials.

At the heart of all of c4c’s data harmonisation efforts are the FAIR data principles, and the challenging task of applying FAIR to data collected at the CRF (or patient) level. This webinar will present the approach c4c is taking to FAIRification, the collaboration with FAIRPlus, and how we plan to tackle the harmonisation of disease-specific data. We are planning an interactive webinar with lots of opportunity for questions and discussion. We hope you will join us.

C4C: https://www.imi.europa.eu/projects-results/project-factsheets/c4c