There's a paper from Novartis/UCSF just published in Nature that is getting a lot of coverage on the internet at the moment, and for good reason - although, as the cartoon above suggests, it will probably still have less impact than news of Justin Bieber's new haircut, or the latest handbags from Christian Lacroix. It uses the SEA target prediction method, trained on ChEMBL bioactivity data, to predict new targets (and then, by association, side effects) for existing drugs. These predictions were then experimentally tested and confirmed in a number of cases - this experimental validation is clearly complex and expensive, so it is great news that in silico methods can start to generate realistic and testable hypotheses for adverse drug reactions (there are positive side effects too, and these are pretty interesting to look for with these methods as well).
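As an aside, the "by association" step is conceptually trivial once the target predictions exist: join predicted off-targets against a table of known target-to-adverse-event links. Here is a minimal sketch of that idea only - it is not the paper's pipeline, and the targets and side effects in it are just illustrative placeholders.

```python
# Toy 'guilt-by-association' lookup - not the paper's pipeline.
# Targets and side effects below are illustrative placeholders.
predicted_targets = ["5-HT2B", "hERG"]      # hypothetical off-target predictions

target_to_adr = {                           # hypothetical target-to-ADR links
    "5-HT2B": ["valvulopathy"],
    "hERG":   ["QT prolongation"],
    "M1":     ["dry mouth", "blurred vision"],
}

for target in predicted_targets:
    for adr in target_to_adr.get(target, []):
        print(f"predicted target {target} -> possible adverse event: {adr}")
```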
The use of SEA as the target prediction method was inevitable given the authors involved, but, following up on some presentations at this spring's National ACS meeting in San Diego, there would also seem to be clear benefits in including other methods for linking a compound to a target - nearest-neighbour approaches using simple Tanimoto similarities, and naive Bayes/ECFP-type approaches. The advantage of the SEA approach is that it seems to generalise better (sorry, I can't remember who gave the talk on this), so it can probably make more comprehensive/complete predictions and be less tied to the training data (in this case ChEMBL) - and as databases grow, these predictions will get a lot better. There will also be big improvements possible if other data adopts the same basic data model as ChEMBL (or something like the services in OpenPHACTS), so that methods can pool across different data sources, including proprietary in-house data.
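For concreteness, here is roughly what the simplest of these alternatives - a nearest-neighbour lookup using Tanimoto similarity of circular (Morgan/ECFP-like) fingerprints - looks like with RDKit. This is a sketch under toy assumptions: the ligand/target pairs and the query structure are invented, and a real run would use ChEMBL bioactivity data and orders of magnitude more ligands.

```python
# Nearest-neighbour target prediction sketch (not SEA itself).
# Ligand/target pairs and the query are illustrative placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

known_ligands = [                       # (SMILES, annotated target) - toy data
    ("CC(=O)Oc1ccccc1C(=O)O", "COX-1"),
    ("CC(C)Cc1ccc(cc1)C(C)C(=O)O", "COX-2"),
    ("CC(C)NCC(O)COc1cccc2ccccc12", "beta-adrenergic receptor"),
]

def fingerprint(smiles):
    """ECFP-like (Morgan, radius 2) bit-vector fingerprint."""
    return AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(smiles), 2, nBits=2048)

def predict_targets(query_smiles, top_n=2):
    """Rank known targets by Tanimoto similarity of their ligands to the query."""
    query_fp = fingerprint(query_smiles)
    scored = [(DataStructs.TanimotoSimilarity(query_fp, fingerprint(smi)), target)
              for smi, target in known_ligands]
    return sorted(scored, reverse=True)[:top_n]

print(predict_targets("CC(=O)Oc1ccccc1C(=O)OC"))   # hypothetical query structure
```

(SEA itself goes further: it aggregates similarity between a query and a target's entire ligand set and scores that against a random background, BLAST-style, rather than trusting any single nearest neighbour - presumably part of why it generalises well.)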
There are probably papers being written right now about a tournament/consensus multi-method approach to target prediction using an ensemble of the above methods. (If such a paper uses random forests, and I get asked to review it, it will be carefully stored in /dev/null) ;)
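To be fair, the consensus idea itself is simple enough - here is a toy rank-averaging sketch, with per-method outputs that are entirely invented (and no random forests in sight).

```python
# Toy consensus: average each target's rank across several prediction methods.
# The per-method rankings below are invented purely for illustration.
method_rankings = {
    "SEA-like":          ["5-HT2B", "hERG", "M1"],
    "nearest-neighbour": ["hERG", "5-HT2B", "D2"],
    "naive Bayes":       ["5-HT2B", "D2", "hERG"],
}

# Targets a method does not rank at all get a penalty rank past the longest list.
penalty_rank = 1 + max(len(ranking) for ranking in method_rankings.values())
all_targets = {t for ranking in method_rankings.values() for t in ranking}

mean_rank = {}
for target in all_targets:
    ranks = [ranking.index(target) + 1 if target in ranking else penalty_rank
             for ranking in method_rankings.values()]
    mean_rank[target] = sum(ranks) / len(ranks)

for target in sorted(mean_rank, key=mean_rank.get):
    print(f"{target:10s} mean rank {mean_rank[target]:.2f}")
```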
So, here are some things I think would be useful improvements to this sort of approach:
1) Inclusion of the functional assays from ChEMBL in predictions (i.e. don't tie oneself to simple molecular target assays). The big problem here, though, is that while pooling of target bioassay data is straightforward, pooling/clustering of functional data is not (see the first sketch after this list).
2) Where do you set affinity thresholds, and how do the affinities relate to the pharmacodynamics of the side effects? My view is that there will be some interesting analyses of ChEMBL that maybe, just maybe, allow one to address this issue. Remember, we know quite a lot about the exposure of the human body to a given drug at a given dose level (see the second sketch after this list)...
3) Consideration of (active) metabolites. It's pretty straightforward now to predict the structures of likely metabolites (though not at a quantitative level), and this may be useful for drugs that are extensively metabolised in vivo (see the third sketch after this list).
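On point 1), the practical problem is visible as soon as you try to pool ChEMBL-style activity records: binding assays map cleanly onto molecular targets, whereas functional assays are described by fairly free-text endpoints that need curating/clustering first. A minimal sketch, with invented records (the ChEMBL-style IDs are placeholders, not real data):

```python
# Sketch of the pooling problem in point 1): split ChEMBL-style activity
# records by assay type. Records are invented; the IDs are placeholders.
from collections import defaultdict

activities = [
    # (molecule_id, target_id_or_assay_description, assay_type, endpoint, value)
    ("CHEMBL0001", "CHEMBL2001",                   "B", "Ki",   1200.0),
    ("CHEMBL0001", "rat paw oedema model",         "F", "ED50", 15.0),
    ("CHEMBL0002", "CHEMBL2002",                   "B", "IC50", 35.0),
    ("CHEMBL0002", "guinea pig ileum contraction", "F", "EC50", 210.0),
]

pools = defaultdict(list)
for record in activities:
    pools[record[2]].append(record)          # 'B' = binding, 'F' = functional

# The binding pool can be keyed directly on molecular target IDs; the
# functional pool still needs its free-text assay descriptions clustered
# before it can be used for target prediction.
print(len(pools["B"]), "binding records,", len(pools["F"]), "functional records")
```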
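On point 2), one crude way of linking affinity thresholds to pharmacodynamics is to compare each predicted off-target affinity against the plasma exposure actually achieved at the therapeutic dose - a Cmax/Ki-style margin. All affinities, the Cmax and the 0.1 margin below are made-up illustrative values:

```python
# Crude exposure check for point 2): flag predicted off-targets whose affinity
# is within reach of the plasma concentration at the therapeutic dose.
# All numbers here are made-up illustrative values.
predicted_off_targets = {      # target: affinity (Ki, nM) - hypothetical
    "5-HT2B":        40.0,
    "hERG":          1500.0,
    "M1 muscarinic": 25000.0,
}

cmax_nM = 300.0    # hypothetical plasma Cmax at the therapeutic dose
margin = 0.1       # flag anything with Cmax/Ki at or above this ratio

for target, ki in sorted(predicted_off_targets.items(), key=lambda kv: kv[1]):
    ratio = cmax_nM / ki
    verdict = "plausibly engaged in vivo" if ratio >= margin else "unlikely at this dose"
    print(f"{target:15s} Ki = {ki:8.1f} nM   Cmax/Ki = {ratio:5.2f}   -> {verdict}")
```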
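And on point 3), likely metabolite structures can be enumerated with rule-based transformations; real predictors use large curated rule sets, but a couple of hand-written SMARTS rules in RDKit are enough to show the idea. The two rules (O- and N-demethylation) and the parent structure below are toy examples only:

```python
# Toy metabolite enumeration for point 3): apply a couple of hand-written
# SMARTS biotransformation rules. Real predictors use large curated rule sets;
# this only sketches the idea.
from rdkit import Chem
from rdkit.Chem import AllChem

RULES = [
    AllChem.ReactionFromSmarts("[O:1][CH3]>>[O:1]"),   # O-demethylation
    AllChem.ReactionFromSmarts("[N:1][CH3]>>[N:1]"),   # N-demethylation
]

def toy_metabolites(parent_smiles):
    """Return SMILES of crude candidate metabolites for one parent structure."""
    parent = Chem.MolFromSmiles(parent_smiles)
    candidates = set()
    for rxn in RULES:
        for products in rxn.RunReactants((parent,)):
            for product in products:
                try:
                    Chem.SanitizeMol(product)
                    candidates.add(Chem.MolToSmiles(product))
                except Exception:
                    pass   # discard chemically invalid products
    return sorted(candidates)

print(toy_metabolites("COc1ccccc1CN(C)C"))   # hypothetical parent structure
```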
Anyway, to finish off with some eye-candy, here's a picture from the paper (hopefully allowed under fair use!).
And here's a reference to the paper, in good old Bell/AT&T Labs refer format - Mendeley-Schmendeley, as my mother used to say when I was a boy.
%T Large-scale prediction and testing of drug activity on side-effect targets
%A E. Lounkine
%A M.J. Keiser
%A S. Whitebread
%A D. Mikhailov
%A J. Hamon
%A J.L. Jenkins
%A P. Lavan
%A E. Weber
%A A.K. Doak
%A S. Côté
%A B.K. Shoichet
%A L. Urban
%J Nature
%D 2012
%O doi:10.1038/nature11159