
Drug Side Effect Prediction and Validation


There's a paper from Novartis/UCSF just published in Nature that is getting a lot of coverage on the internet at the moment, and for good reason - although, as the cartoon above states, it will probably have less impact than news of Justin Bieber's new haircut, or the latest handbags from Christian Lacroix. It uses the SEA target prediction method, trained on ChEMBL bioactivity data, to predict new targets (and then, by association, side effects) for existing drugs. These predictions were then experimentally tested, and confirmed in a number of cases. This experimental validation is clearly complex and expensive, so it is great news that in silico methods can start to generate realistic and testable hypotheses for adverse drug reactions (there are positive side effects too, and these are also pretty interesting to look for using these methods).

The use of SEA as the target prediction method was inevitable given the authors involved, but it also follows up on some presentations at this spring's National ACS meeting in San Diego. There would also seem to be clear benefits in including other methods of linking a compound to a target - nearest-neighbour approaches using simple Tanimoto measures, and naive Bayes/ECFP-type approaches. The advantage of the SEA approach is that it seems to generalise better (sorry, I can't remember who gave the talk on this), so it can probably make more comprehensive/complete predictions and be less tied to the training data (in this case ChEMBL) - and as databases grow, these predictions will get a lot better. There will also be big improvements possible if other data sources adopt the same basic data model as ChEMBL (or something like the services in OpenPHACTS), so that methods can pool across different data sources, including proprietary in-house data.
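To make the nearest-neighbour idea concrete, here is a toy sketch of Tanimoto-based target prediction. Fingerprints are represented simply as Python sets of "on" bit indices, and the compound names, bit values and target annotations are all invented for illustration - a real implementation would use proper fingerprints (e.g. ECFP) over a ChEMBL-scale reference set.

```python
# Toy nearest-neighbour target prediction by Tanimoto similarity.
# Fingerprints are sets of "on" bit indices; all data below is made up.

def tanimoto(fp1, fp2):
    """Tanimoto coefficient between two bit-index sets."""
    union = len(fp1 | fp2)
    return len(fp1 & fp2) / union if union else 0.0

def predict_targets(query_fp, reference, threshold=0.4):
    """Return (target, similarity, neighbour) tuples for reference
    compounds whose similarity to the query exceeds the threshold."""
    hits = []
    for name, (fp, target) in reference.items():
        sim = tanimoto(query_fp, fp)
        if sim >= threshold:
            hits.append((target, sim, name))
    return sorted(hits, key=lambda h: -h[1])

# Hypothetical reference set: fingerprint -> annotated target
reference = {
    "cmpd_A": ({1, 4, 7, 9, 12}, "DRD2"),
    "cmpd_B": ({1, 4, 8, 12, 15}, "HTR2A"),
    "cmpd_C": ({2, 5, 20, 33}, "ADRB1"),
}

query = {1, 4, 7, 12, 15}
print(predict_targets(query, reference))
```

The query compound inherits the target annotations of its close neighbours; the similarity threshold controls the precision/recall trade-off, which is exactly where the generalisation advantage of SEA-style statistics comes in.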

There are probably papers being written right now about a tournament/consensus multi-method approach to target prediction using an ensemble of the above methods. (If such a paper uses random forests, and I get asked to review it, it will be carefully stored in /dev/null) ;)
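For what it's worth, the simplest form of such a consensus is just rank aggregation: each method produces a ranked target list, and targets are scored by their mean rank across methods. The method names and rankings below are invented for illustration.

```python
# Minimal sketch of a consensus ("tournament") combination of several
# target-prediction methods by mean rank. All rankings are invented.

from collections import defaultdict

def consensus(rankings):
    """rankings: dict of method name -> ordered list of predicted targets.
    Returns all targets sorted by mean rank (lower is better); a target
    missing from a method's list gets a penalty rank of len(list) + 1."""
    scores = defaultdict(list)
    all_targets = {t for ranked in rankings.values() for t in ranked}
    for ranked in rankings.values():
        penalty = len(ranked) + 1
        for target in all_targets:
            rank = ranked.index(target) + 1 if target in ranked else penalty
            scores[target].append(rank)
    return sorted(all_targets, key=lambda t: sum(scores[t]) / len(scores[t]))

rankings = {
    "SEA":         ["DRD2", "HTR2A", "ADRB1"],
    "NN_tanimoto": ["HTR2A", "DRD2"],
    "naive_bayes": ["DRD2", "ADRB1", "HTR2A"],
}
print(consensus(rankings))  # DRD2 wins on mean rank
```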

So here are some things I think would be useful improvements to this sort of approach.

1) Inclusion of the functional assays from ChEMBL in the predictions (i.e. don't tie oneself to simple molecular target assays). The big problem here, though, is that while pooling of target bioassay data is straightforward, pooling/clustering of functional data is not.
2) Where do you set affinity thresholds, and how do the affinities relate to the pharmacodynamics of the side effects? My view is that there are some interesting analyses of ChEMBL that maybe, just maybe, allow one to address this issue. Remember, we know quite a lot about the exposure of the human body to a given drug at a given dose level...
3) Consideration of (active) metabolites. It's pretty straightforward now to predict the structures of likely metabolites (though not at a quantitative level), and this may be useful for drugs that are extensively metabolised in vivo.
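On point 2, a common safety-pharmacology heuristic for linking affinity to exposure is to compare the unbound plasma concentration at the clinical dose against the in vitro affinity: an off-target becomes a plausible side-effect driver when the ratio approaches or exceeds one. A minimal sketch, with all affinities, concentrations and the 0.1 flagging cut-off invented for illustration:

```python
# Hedged sketch relating predicted off-target affinities to exposure.
# Heuristic: flag a target when unbound Cmax / Ki is large enough to
# expect meaningful occupancy. All numbers below are hypothetical.

def exposure_margin(cmax_unbound_nM, ki_nM):
    """Ratio of unbound Cmax to Ki; values around 0.1-1 and above are
    often treated as pharmacologically relevant."""
    return cmax_unbound_nM / ki_nM

predicted_offtargets = {
    "HERG":  {"ki_nM": 5000.0},
    "HTR2B": {"ki_nM": 30.0},
}
cmax_unbound_nM = 50.0  # hypothetical unbound Cmax at therapeutic dose

for target, data in predicted_offtargets.items():
    margin = exposure_margin(cmax_unbound_nM, data["ki_nM"])
    flag = "likely relevant" if margin >= 0.1 else "unlikely at this dose"
    print(f"{target}: Cmax_u/Ki = {margin:.2f} ({flag})")
```

This is exactly the sort of analysis that dose and exposure data alongside ChEMBL affinities could support systematically.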

Anyway, to finish off with some eye-candy, here's a picture from the paper (hopefully allowed under fair use!).


And here's a reference to the paper, in good old AT&T Bell Labs refer format - Mendeley-Schmendeley, as my mother used to say when I was a boy.

%T Large-scale prediction and testing of drug activity on side-effect targets
%A E. Lounkine
%A M.J. Keiser
%A S. Whitebread
%A D. Mikhailov
%A J. Hamon
%A J.L. Jenkins
%A P. Lavan
%A E. Weber
%A A.K. Doak
%A S. Côté
%A B.K. Shoichet
%A L. Urban
%J Nature
%D 2012
%O doi:10.1038/nature11159
