
Data checks


ChEMBL contains a broad range of binding, functional and ADMET-type assays, in formats ranging from in vitro single-protein assays to anti-proliferative cell-based assays. Some variation is expected, even for very similar assays, since these are often performed by different groups and institutes. ChEMBL includes references for all bioactivity values so that full assay details can be reviewed if needed; however, there are a number of other data checks that can be used to identify potentially problematic results.

1) Data validity comments:

The data validity column was first included in ChEMBL v15 and flags activities with potential validity issues, such as a non-standard unit for the standard type or a value outside the expected range. Users can review flagged activities and decide how these should be handled. The data validity column can be viewed on the interface (click 'Show/Hide columns' and select 'data validity comments') and can be found in the activities table in the full database.

* Acceptable ranges/units for standard_types are provided in the ACTIVITY_STDS_LOOKUP table. An exception is made for certain fragment-based activities (MW <= 350) where the data validity comment is not applied.
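As a minimal sketch of how flagged rows can be pulled out, here is a toy in-memory table that mimics a few columns of the activities table (the rows and comment strings below are invented for illustration; real data would come from a ChEMBL dump):

```python
import sqlite3

# Toy subset of the ChEMBL activities table; values are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE activities (
    activity_id INTEGER, standard_type TEXT,
    standard_value REAL, standard_units TEXT,
    data_validity_comment TEXT)""")
conn.executemany(
    "INSERT INTO activities VALUES (?,?,?,?,?)",
    [(1, "IC50", 50.0, "nM", None),
     (2, "IC50", -3.0, "nM", "Outside typical range"),
     (3, "Ki", 10.0, "kg", "Non standard unit for type")])

# Flagged activities carry a non-NULL data_validity_comment.
flagged = conn.execute(
    "SELECT activity_id, data_validity_comment FROM activities "
    "WHERE data_validity_comment IS NOT NULL").fetchall()
print(flagged)  # [(2, 'Outside typical range'), (3, 'Non standard unit for type')]
```

The same `WHERE data_validity_comment IS NOT NULL` filter (or its inverse) works against the full activities table in a local copy of the database.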

2) Confidence scores:

The confidence scores reflect both the target type and the confidence that the mapped target is correct (e.g. score 0 = no target assigned, score 9 = direct single protein target assigned). In cases where target protein accessions were unavailable during initial mapping, homologues from different species/strains have sometimes been assigned with lower confidence scores. Curation is ongoing and confidence scores may change between releases as additional assays are mapped (or re-mapped) to targets. The confidence scores can be viewed on the interface and are found in the assays table in the database.
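A common use of the score is to restrict an analysis to assays mapped with high confidence. A sketch with a toy assays table (the scores shown are real scale endpoints; the rows themselves are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Toy assays table; confidence_score sits on the assay, not the activity.
conn.execute(
    "CREATE TABLE assays (assay_id INTEGER, tid INTEGER, confidence_score INTEGER)")
conn.executemany("INSERT INTO assays VALUES (?,?,?)",
    [(1, 101, 9),   # direct single protein target assigned
     (2, 102, 4),   # lower-confidence mapping (illustrative)
     (3, 103, 0)])  # no target assigned

# Keep only assays with a direct single protein target.
high_conf = [row[0] for row in conn.execute(
    "SELECT assay_id FROM assays WHERE confidence_score = 9")]
print(high_conf)  # [1]
```

Relaxing the threshold (e.g. `confidence_score >= 7`) trades target certainty for coverage.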

3) Activity comments:

Activity comments capture the author or depositor's overall activity conclusions and may take into account counter-screens, curve fitting, etc. It may be worth reviewing the activity comments to identify cases where apparently potent compounds have been deemed inactive by depositors. For further details on activity comments, see our previous blog post. The activity comments can be viewed on the interface and are available in the activities table of the database.
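One way to surface such cases is to look for rows whose measured value looks potent while the comment says otherwise. A sketch, again on an invented toy table (the 100 nM cut-off is an arbitrary choice for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE activities (
    activity_id INTEGER, standard_type TEXT,
    standard_value REAL, standard_units TEXT, activity_comment TEXT)""")
conn.executemany("INSERT INTO activities VALUES (?,?,?,?,?)",
    [(1, "IC50", 20.0, "nM", "Active"),
     (2, "IC50", 15.0, "nM", "Not Active"),    # potent value, but depositor calls it inactive
     (3, "IC50", 90000.0, "nM", "Not Active")])

# Apparently potent (< 100 nM here) yet flagged inactive by the depositor.
suspicious = conn.execute(
    "SELECT activity_id FROM activities "
    "WHERE standard_value < 100 "
    "AND LOWER(activity_comment) LIKE '%not active%'").fetchall()
print(suspicious)  # [(2,)]
```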

4) Potential duplicates:

Bioactivity data are extracted from seven core journals, and these may include secondary citations of results published elsewhere. Potential duplicates are flagged when identical compound, target, activity, type and unit values are reported. The potential duplicates field is available on the interface and is found in the activities table of the database.
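Excluding flagged rows is a one-line filter. A sketch on a toy table (column names follow the ChEMBL schema; the rows are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# potential_duplicate is 1 when an identical result has already been recorded.
conn.execute("""CREATE TABLE activities (
    activity_id INTEGER, molregno INTEGER, standard_type TEXT,
    standard_value REAL, potential_duplicate INTEGER)""")
conn.executemany("INSERT INTO activities VALUES (?,?,?,?,?)",
    [(1, 500, "IC50", 25.0, 0),
     (2, 500, "IC50", 25.0, 1)])  # same compound/type/value: flagged as a duplicate

# Keep one copy of each result by dropping flagged duplicates.
unique_rows = conn.execute(
    "SELECT activity_id FROM activities WHERE potential_duplicate = 0").fetchall()
print(unique_rows)  # [(1,)]
```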

5) Variants:

Protein variation can change the affinity of drugs for their targets. On the interface, variant proteins are recorded in the assay descriptions, which can be used to check whether activities correspond to variant or 'wild-type' targets. The variant sequences table was added to the database in version 22 and is linked to the assays table through the variant ID. The variant ID can be used to include or exclude variants from assay results. Curation is underway to annotate additional variants from historical assays (more on this to follow).
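Since the variant ID lives on the assays table, variant assays can be included or excluded with a simple NULL check. A sketch on invented toy tables (the ABL1 T315I mutation is used purely as an illustrative description, in the spirit of the Imatinib example below):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE assays (assay_id INTEGER, description TEXT, variant_id INTEGER)")
conn.execute(
    "CREATE TABLE variant_sequences (variant_id INTEGER, mutation TEXT)")
conn.execute("INSERT INTO variant_sequences VALUES (1, 'T315I')")
conn.executemany("INSERT INTO assays VALUES (?,?,?)",
    [(1, "Inhibition of ABL1 T315I mutant", 1),
     (2, "Inhibition of wild-type ABL1", None)])

# Assays without a variant_id correspond to the unannotated/wild-type target.
wild_type = conn.execute(
    "SELECT assay_id FROM assays WHERE variant_id IS NULL").fetchall()
print(wild_type)  # [(2,)]
```

Joining assays to variant_sequences on variant_id instead retrieves the mutation details for the variant assays.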

Hopefully this provides an idea of some of the available data checks. Questions? Please get in touch on the Helpdesk or have a look through our training materials and FAQs.

Data checks using Imatinib as an example:

