
Ligand-based target predictions in ChEMBL



In case you haven't noticed, ChEMBL_18 has arrived. As usual, it brings new additions, improvements and enhancements, both to the data and annotation and to the interface. One of the new features is target predictions for small molecule drugs. If you go to the compound report card for such a drug, say imatinib or cabozantinib, and scroll down towards the bottom of the page, you'll see two tables with predicted single-protein targets, corresponding to the two models that we used for the predictions.


 - So what are these models and how were they generated? 

They belong to the family of so-called ligand-based target prediction methods. That means the models are trained using ligand information only. Specifically, a model learns which substructural features of ligands (encoded as fingerprints) correlate with activity against a certain target, and assigns a score to each of these features. Given a new molecule and its set of features, the model sums the individual feature scores for each target and returns a ranked list of targets, with the most likely ones at the top. Ligand-based target prediction methods have been quite popular in recent years, as they have proved useful for target deconvolution and mode-of-action prediction of phenotypic hits / orphan actives. See here for an example of such an approach and here for a comprehensive review.
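To make the scoring idea concrete, here is a toy illustration (not the actual ChEMBL code; the feature names, target names and weights are invented) of how per-feature scores can be summed per target and the targets then ranked:

    # Toy illustration: rank hypothetical targets for a query molecule by
    # summing per-feature weights learned from known ligands.
    from collections import defaultdict

    # Invented learned weights: fingerprint feature -> {target: score}
    feature_scores = {
        "bit_17":  {"KinaseA": 1.2, "GPCR_B": 0.1},
        "bit_42":  {"KinaseA": 0.8, "ProteaseC": 0.9},
        "bit_101": {"GPCR_B": 1.5},
    }

    # Features present in the query molecule's fingerprint
    query_features = ["bit_17", "bit_42"]

    # Sum the feature scores per target and sort targets by total score
    totals = defaultdict(float)
    for feature in query_features:
        for target, score in feature_scores.get(feature, {}).items():
            totals[target] += score

    ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
    print(ranked)  # [('KinaseA', 2.0), ('ProteaseC', 0.9)]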


 - OK, and how were they generated?

As usual, it all started with a carefully selected subset of ChEMBL_18 data containing pairs of compounds and single-protein targets. We used two activity cut-offs, namely 1uM and a more relaxed 10uM, which correspond to two models trained on bioactivity data against 1028 and 1244 targets respectively. KNIME and pandas were used for the data pre-processing. Morgan fingerprints (radius=2) were calculated using RDKit and then used to train a multinomial Naive Bayesian multi-category scikit-learn model. These models were then used to predict targets for the small molecule drugs, as mentioned above.
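For readers who want a feel for this kind of pipeline, here is a minimal sketch of such a training step, not our exact code: the input file, its column names and the fingerprint/label handling are assumptions made for illustration, and the multi-category Naive Bayes model is realised here as a one-vs-rest ensemble of scikit-learn MultinomialNB classifiers.

    # Minimal sketch: Morgan fingerprints + multi-category Naive Bayes.
    # The input file and its columns are assumptions for illustration only.
    import numpy as np
    import pandas as pd
    from rdkit import Chem
    from rdkit.Chem import AllChem
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.preprocessing import MultiLabelBinarizer

    # Assumed input: one row per compound, with its SMILES and a ';'-separated
    # list of single-protein target ChEMBL IDs it is active against (e.g. at <= 1uM)
    data = pd.read_csv("activity_pairs.csv")  # columns: smiles, target_chembl_ids

    def morgan_fp(smiles, radius=2, n_bits=2048):
        """Morgan fingerprint as a numpy bit array, or None if the SMILES fails."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return None
        bitvect = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
        return np.array(list(bitvect), dtype=np.int8)

    fingerprints, labels = [], []
    for smiles, targets in zip(data["smiles"], data["target_chembl_ids"]):
        fp = morgan_fp(smiles)
        if fp is not None:
            fingerprints.append(fp)
            labels.append(targets.split(";"))

    X = np.array(fingerprints)

    # One binary column per target; a compound can be active against several targets
    mlb = MultiLabelBinarizer()
    Y = mlb.fit_transform(labels)

    # One multinomial Naive Bayes classifier per target category
    model = OneVsRestClassifier(MultinomialNB())
    model.fit(X, Y)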


 - Any validation? 

Besides more trivial property predictions such as logP/logD, this is the first time ChEMBL hosts data that are not experimental measurements - so this is a big deal and we wanted to try and do this right. First of all, we did a 5-fold stratified cross-validation. But how do you assess a model with a many-to-many relationship between items (compounds) and categories (targets)? For each compound in each of the five 20% test sets, we got the top 10 ranked predictions. We then checked whether these predictions agree with the known targets for that compound. Ideally, a known target should be correctly predicted at the 1st position of the ranked list, otherwise at the 2nd position, the 3rd, and so on. By aggregating over all compounds of all test sets, you get this pie chart:


This means that a known target is correctly predicted by the model at the first attempt (Position 1 in the list of predicted targets) in ~69% of the cases. In fact, for only 9% of the compounds in the test sets was none of the known targets found within the top 10 predictions (Found above 10).
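For the curious, here is a minimal sketch of how such a position-of-first-correct-prediction tally could be computed, assuming the model and the fingerprint/label matrices from the training sketch above, with X_test and Y_test standing for one held-out fold:

    import numpy as np
    from collections import Counter

    def first_hit_positions(model, X_test, Y_test, top_n=10):
        """Tally, per test compound, the 1-based rank at which one of its known
        targets first appears among the top_n predictions ('above 10' if none do)."""
        scores = model.predict_proba(X_test)          # shape: (n_compounds, n_targets)
        tally = Counter()
        for row_scores, row_truth in zip(scores, Y_test):
            known = set(np.where(row_truth == 1)[0])  # indices of the known targets
            ranked = np.argsort(row_scores)[::-1][:top_n]
            hit = next((rank + 1 for rank, t in enumerate(ranked) if t in known), None)
            tally[("position %d" % hit) if hit else ("above %d" % top_n)] += 1
        return tally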

This is related to precision, but what about recall of known targets? Here's another chart:



This means that, on average, by considering the top 10 most likely target predictions (<1% of the target pool), the model correctly predicts ~89% of a compound's known single-protein targets.
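Again only as a sketch, and under the same assumptions as above, the corresponding recall figure could be computed along these lines:

    import numpy as np

    def mean_recall_at_n(model, X_test, Y_test, top_n=10):
        """Average fraction of each compound's known targets recovered in the top_n predictions."""
        scores = model.predict_proba(X_test)
        per_compound = []
        for row_scores, row_truth in zip(scores, Y_test):
            known = set(np.where(row_truth == 1)[0])
            if not known:
                continue                              # skip compounds with no known targets
            top = set(np.argsort(row_scores)[::-1][:top_n])
            per_compound.append(len(known & top) / len(known))
        return sum(per_compound) / len(per_compound)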

Finally, we compared the new open-source approach (right) to an established one built with a commercial workflow environment (left), using the same data and very similar descriptors:


If you manage to ignore for a moment the slightly different colour coding, you'll see that their predictive performance is pretty much equivalent.

 - It all sounds good, but can I get predictions for my own compounds?

We plan to provide the models, along with IPython Notebook examples showing how to use them, in another blog post that will follow soon. There are also plans for a publicly available target prediction web service, something like SMILES in, predicted targets out. If you would be interested in this, or if you have any feedback or suggestions for the target prediction functionality, let us know.
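In the meantime, purely as a hypothetical illustration (it reuses the model, mlb and morgan_fp objects from the training sketch above, with aspirin's SMILES as a placeholder query), getting a ranked list of targets for your own compound could look like this:

    # Hypothetical usage of the sketch model above; not an official ChEMBL API.
    query_smiles = "CC(=O)Oc1ccccc1C(=O)O"            # aspirin, as a placeholder
    fp = morgan_fp(query_smiles).reshape(1, -1)       # single-row feature matrix
    scores = model.predict_proba(fp)[0]
    for idx in scores.argsort()[::-1][:10]:           # 10 highest-scoring targets
        print(mlb.classes_[idx], round(float(scores[idx]), 3))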

George

Comments

Unknown said…
Very nice post, cheers!
Unknown said…
Any thoughts on the domain of validity in chemical space of these models? Do you expect them to work well across all of chembl, and if not can you specify what compounds they will fail on?
Unknown said…
Thank you for the very interesting work! I have some questions. First of all, I don't quite understand your validation technique. For example: a compound has 3 targets. Target 1 was found at the first position; target 2 was found at the second position and target 3 was not found in the top 10 list of predictions. What did you do exactly in similar cases? Second, how many compounds are there in your training set?
