
Data Integration - The Ontogeny of Chemical Data


It is great to be able to perform federated queries across data sources - really great. This usefulness drives the development of common representational and access standards (things like MIABE, InChI, and PSICQUIC, to pick examples we are involved in ourselves). The ability to take data out of silos and share it has arguably been one of the defining elements of science in the last ten years, and the generalisation of this data sharing and representation via semantic web technologies may become one of the defining features of the next decade. One of the issues, though, is that of data provenance and primacy, and this is something we have started to think about, and plan for, in our own data integration efforts.
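As a small, concrete illustration of why a shared representation such as InChI matters: once resources agree on a standard identifier, the same compound can be matched between them with no further negotiation. A minimal sketch (assuming an RDKit build with InChI support; the SMILES is just an example):

    # Minimal sketch: a shared structure standard (InChI / InChIKey) gives otherwise
    # unconnected databases a common join key for the same compound.
    # Assumes an RDKit build with InChI support; the SMILES is just an example (aspirin).
    from rdkit import Chem

    mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

    inchi = Chem.MolToInchi(mol)        # standard InChI string
    inchikey = Chem.MolToInchiKey(mol)  # fixed-length hash - a convenient cross-resource key

    print(inchi)
    print(inchikey)                     # any resource holding this compound should agree on this key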

Data enters the 'system' somewhere in a 'database', and these bundles are then 'licensed' to the community. The licenses may be silent, ambiguous, clear as a bell, shockingly unacceptable, or whatever - there is a clear need for standardisation of licenses or, at the very least, a requirement that license terms be freely available on the resource website.

There are arguably two basic types of chemical database at the moment - primary and secondary. Primary databases are the first point of entry of the data onto the internet, or to the user community, and the groups responsible for them typically focus on data novelty (providing something that others don't) and also typically care about curation and indexing of the data. The current funding system does a pretty good job of minimising overlap between funded primary databases. Primary data can be a one-off effort, a passive archive, or regularly updated with new content and curated; and there are many other nuances to consider. These primary databases tend to have a theme - spectroscopic data, or bioactivity data, or synthesis, etc. - and tend to be relatively small. Together they are the substrate for the secondary databases.

There are arguably (again, this is all just top-of-the-head thinking) two types of integration that people typically do - 'vertical' and 'horizontal'. By 'vertical' I mean bundling all the chemical objects together, or all the non-redundant proteins, or all the protein structures and protein models; by 'horizontal' I mean integrating across different object classes, e.g. protein structures and small-molecule ligands, or genes and protein structures. The 'vertical secondary' databases typically add little new data, but they federate primary data content, reduce redundancy, provide cross-references, and so on, and they often add significant value in terms of new descriptors, services, etc. It is also often a lot easier to curate data once it is integrated across other resources. Secondary databases may be physical (copies are made of the primary databases and all loaded together) or virtual (they query the primary resources through an API), as sketched below.
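To make the 'virtual' flavour a bit more concrete, here is a rough sketch of what such a secondary view might look like - live queries to two primary-style resources, merged and tagged with where each answer came from. This is illustrative only: the endpoint URLs and filter names are assumptions that would need checking against the current service documentation, and it is not a description of any particular production system.

    # Rough sketch of a 'virtual' vertical secondary resource: query primary-style
    # resources live over their public REST APIs and bundle the answers together,
    # tagging each with its provenance. Endpoints and filters are illustrative
    # assumptions, not a tested or recommended integration.
    import requests

    def fetch_pubchem(inchikey):
        # PubChem PUG REST, keyed on InChIKey (property list is illustrative)
        url = ("https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/inchikey/"
               f"{inchikey}/property/MolecularFormula,MolecularWeight/JSON")
        return {"source": "PubChem", "url": url, "data": requests.get(url, timeout=30).json()}

    def fetch_chembl(inchikey):
        # ChEMBL web services, filtering molecules on standard InChIKey (filter name assumed)
        url = ("https://www.ebi.ac.uk/chembl/api/data/molecule.json"
               f"?molecule_structures__standard_inchi_key={inchikey}")
        return {"source": "ChEMBL", "url": url, "data": requests.get(url, timeout=30).json()}

    def virtual_record(inchikey):
        # The 'secondary' view is nothing more than the primary answers plus provenance.
        return {"inchikey": inchikey, "records": [fetch_pubchem(inchikey), fetch_chembl(inchikey)]}

    print(virtual_record("BSYNRYMUTXBXSQ-UHFFFAOYSA-N"))  # aspirin's InChIKey, as an example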

Secondary (vertical) databases do a great job of integrating across these disparate resources, allowing queries across otherwise unlinked primary databases, and this is a prerequisite for the more challenging horizontal integration. Horizontal data integration is probably where the majority of impact and insights from data will come from - for example, there are great opportunities to integrate drug pharmacokinetic and pharmacodynamic data with human genetic variation data, to look for ways to deliver currently oral drugs topically, or, for the more entrepreneurial, to mine patent/marketing exclusivity expiries for arbitrage opportunities between diseases/healthcare systems.

So, some examples of primary and secondary chemistry databases: we like to think of ChEMBL as a primary database (more specifically, for the literature and deposited data, which make up the majority of the data we hold). We think of ChemSpider as a secondary database, and some databases such as PubChem are arguably a mix of primary (for the NIH bioactivity data) and secondary (for the compound catalogues and data drawn from other databases).

Is this view of chemical databases useful at all? Maybe not. But it does pose a few questions.
  • What are the optimal mechanisms for curation and error correction of data? Should this be performed at the primary or the secondary level?
  • What are the primary and secondary resources - is it worth providing tracking of data provenance? Should there be a standard format for the manifest of a secondary resource (see the sketch after this list)?
  • How should secondary resources handle updates and corrections driven by changes in their underlying primary resources? This seems quite a poor area at the moment - data loaded once, a few years ago, does not mean that a secondary resource contains the best current view of the primary data.
  • How is licensing addressed in secondary resources? I've found quite a few examples where I can't download a dataset from its primary website, yet it is contained in a secondary resource, nominally freely available.
  • Where should the funding focus go - to standardising access and indexing of primary resources, or to consolidating data into secondary resources?
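On the manifest and update questions above, even something very simple would help: for each primary source, a secondary resource could publish which release it loaded, when, under which licence, and how many records, so that staleness against the current primary release becomes trivially checkable. A toy sketch (all source names, versions, and fields are made up for illustration, not a proposed standard):

    # Toy sketch of a provenance manifest for a secondary resource, plus a
    # staleness check against the primaries' current releases.
    # All source names, versions, and fields are illustrative, not a proposed standard.
    from datetime import date

    manifest = [
        {"source": "PrimaryDB_A", "release_loaded": "release_12", "loaded_on": date(2010, 6, 1),
         "licence_url": "http://primarydb-a.example.org/licence", "records": 250000},
        {"source": "PrimaryDB_B", "release_loaded": "2011.2", "loaded_on": date(2011, 5, 12),
         "licence_url": "unknown", "records": 54000},
    ]

    # What the primaries currently advertise as their latest releases (again, illustrative).
    current_releases = {"PrimaryDB_A": "release_14", "PrimaryDB_B": "2011.2"}

    for entry in manifest:
        latest = current_releases.get(entry["source"])
        stale = latest is not None and latest != entry["release_loaded"]
        print(f'{entry["source"]}: loaded {entry["release_loaded"]} on {entry["loaded_on"]}, '
              f'latest is {latest}, stale={stale}, licence={entry["licence_url"]}')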
We have a couple of spreadsheets of various chemical database resources, their licenses, a classification as primary/secondary, etc. - restricted of course to the sort of things we are interested in ;). If there's interest (post a comment on this post, or mail me), we could arrange a web meeting to present this preliminary work. Some of these things are related to the activities of the Open PHACTS project, so it will be interesting to see how they address some of these issues.
