
Target prediction, QSAR and conformal prediction


https://jcheminf.biomedcentral.com/articles/10.1186/s13321-018-0325-4 

You know that in the ChEMBL group, we love to play with the data we collect! Back in April 2014, we started working on a target prediction tool. Wow, that was almost 5 years ago! Since then, we have kept updating the tool for each new ChEMBL release, providing you with the current models and with the prediction results for drug molecules on the ChEMBL website. The good news is that these target predictions are not dead and a successor is on its way!

First, we would like to introduce some closely related work. You may have heard about conformal prediction (CP). If not, it is a machine learning framework developed to associate confidence with predictions, which I personally consider a requirement for decision making. Basically, you train a model as you would in QSAR, but you then predict on a so-called calibration set, for which you know the actual values. For each of these observations you obtain two probabilities: one for the active and one for the inactive class (in a typical binary classification scheme). Each time you predict a new compound, you compare its probabilities to those of the calibration set (the so-called non-conformity scores) and derive a p-value for each class. Based on your predefined significance level, the compound can then be assigned to different categories: active only or inactive only, but also both active and inactive, or neither of them. I am sure you can start to see the added value of CP!
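The calibrate-then-compare procedure above can be sketched in a few lines. This is a minimal, hypothetical illustration of a Mondrian (class-conditional) inductive conformal classifier on toy data, not the actual ChEMBL pipeline: the non-conformity score here is simply one minus the predicted probability of a class, and the p-value counts how many calibration compounds were at least as "strange" as the new one.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy binary data standing in for actives (1) and inactives (0).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_calib, X_test, y_calib, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

# Train the underlying model as in a normal QSAR workflow.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Non-conformity score: 1 - predicted probability of the class in question,
# collected per class on the calibration set (the Mondrian variant).
calib_proba = clf.predict_proba(X_calib)
calib_scores = {c: 1.0 - calib_proba[y_calib == c, c] for c in (0, 1)}

def p_values(x):
    """p-value per class: fraction of calibration scores >= the new score."""
    proba = clf.predict_proba(x.reshape(1, -1))[0]
    out = {}
    for c in (0, 1):
        score = 1.0 - proba[c]
        cal = calib_scores[c]
        out[c] = (np.sum(cal >= score) + 1) / (len(cal) + 1)
    return out

# At significance 0.2, keep every class whose p-value exceeds it. The result
# can be {0}, {1}, both classes, or the empty set ("none of them").
significance = 0.2
pv = p_values(X_test[0])
prediction_set = [c for c in (0, 1) if pv[c] > significance]
```

The four possible outcomes (active only, inactive only, both, none) fall out naturally from thresholding the two p-values independently, which is exactly where CP differs from a plain probability cutoff.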

Here I have briefly described how it works for classification models, but CP can also be applied to regression models. If you want to know more about conformal prediction, I recommend reading this book and also this very nice example of its application in drug discovery. Having learnt how to build conformal predictors, we were intrigued to see how well they perform against traditional QSAR models on our ChEMBL data!

With this in mind, we decided to build a panel of models using a substantial data set from ChEMBL. With our new protocol, we were able to build models for 788 targets (550 of them human). For the descriptors, we used the RDKit Morgan fingerprint (2048 bits, radius 2) together with 6 physicochemical descriptors. For the machine learning part, we used the good old Random Forest as implemented in scikit-learn version 0.19. For the QSAR models this is all that is needed, but for CP you also need a framework, and this was provided by the very nice library from Henrik Linusson.
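For readers who want to reproduce this kind of featurisation, here is a hedged sketch of the descriptor setup: a 2048-bit Morgan fingerprint (radius 2) concatenated with a handful of physicochemical descriptors. The six descriptors chosen below are illustrative only; the exact set used in the article may differ.

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem, Descriptors

def featurise(smiles: str) -> np.ndarray:
    """Morgan fingerprint (2048 bits, radius 2) + 6 physchem descriptors.

    The descriptor choice here is an assumption for illustration, not
    necessarily the set used in the published models.
    """
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    physchem = [
        Descriptors.MolWt(mol),
        Descriptors.MolLogP(mol),
        Descriptors.TPSA(mol),
        Descriptors.NumHDonors(mol),
        Descriptors.NumHAcceptors(mol),
        Descriptors.NumRotatableBonds(mol),
    ]
    return np.concatenate([np.array(fp, dtype=float), physchem])

features = featurise("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, 2048 + 6 = 2054 values
```

A feature matrix built this way can be fed directly to scikit-learn's `RandomForestClassifier`, for both the plain QSAR and the conformal variants.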

The next part consisted of training the models and checking their internal performance. But we went a bit further: since our models were trained on ChEMBL_23 data, we thought it would be interesting to see how they perform on new data from ChEMBL_24, in a so-called temporal validation. All the details, results and conclusions are presented in the recently accepted article!

The dataset for each target is already available here and you can find the models ready to use there.

Feel free to take a look and to share your opinion in the comments.

Now, you remember that I started this post by mentioning our good old target predictors. So, does this mean a new generation of ChEMBL models using conformal prediction is ready to be launched for our users? Well, unfortunately not yet, but stay tuned!
