Update: A KNIME protocol using the model is available, thanks to Greg Landrum.
Update: New code to train the model and ONNX-exported trained models are available on GitHub.
The use and application of multi-task neural networks is growing rapidly in cheminformatics and drug discovery. Examples can be found in the following publications:
- Deep Learning as an Opportunity in Virtual Screening
- Massively Multitask Networks for Drug Discovery
- Beyond the hype: deep neural networks outperform established methods using a ChEMBL bioactivity benchmark set
But what is a multi-task neural network? In short, it is a neural network architecture that optimises multiple classification/regression problems at the same time, taking advantage of their shared description. This blog post gives a great overview of the architectures involved. All the networks in the references above implement the hard parameter sharing approach.
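To make the hard parameter sharing idea concrete, here is a minimal PyTorch sketch (not the code from the notebooks linked below): a trunk of fully connected layers shared by all tasks, followed by one output unit per target. Layer sizes and names are illustrative.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Hard parameter sharing: one shared trunk, one output unit per task."""
    def __init__(self, n_features, n_tasks, hidden_size=512):
        super().__init__()
        # Shared layers: every task backpropagates through these weights
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden_size),
            nn.ReLU(),
            nn.Linear(hidden_size, hidden_size),
            nn.ReLU(),
        )
        # Task-specific outputs: one logit (active/inactive) per target
        self.heads = nn.Linear(hidden_size, n_tasks)

    def forward(self, x):
        return self.heads(self.shared(x))  # raw logits, one per task
```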
So, given a set of activities linking molecules to targets, we can train a single neural network as a binary multi-label classifier that outputs the probability of activity/inactivity for each target (task) for a given query molecule.
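Here is a minimal sketch of how such a multi-label classifier could be trained, assuming the `MultiTaskNet` from the sketch above and the fact that not every molecule has a measurement for every target (hence the mask). Sizes, optimiser and loss settings are illustrative, not the exact choices of the training notebook.

```python
import torch
import torch.nn as nn

# Illustrative sizes: 2048 fingerprint bits, 500 targets (tasks)
model = MultiTaskNet(n_features=2048, n_tasks=500)   # class from the sketch above
criterion = nn.BCEWithLogitsLoss(reduction="none")   # one loss term per (molecule, target)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(fingerprints, labels, mask):
    """fingerprints: (batch, 2048) float tensor
    labels: (batch, 500) with 1 = active, 0 = inactive
    mask:   (batch, 500) with 1 where an activity measurement exists"""
    optimizer.zero_grad()
    logits = model(fingerprints)
    # Ignore the (molecule, target) pairs without a measurement
    loss = (criterion(logits, labels) * mask).sum() / mask.sum()
    loss.backward()
    optimizer.step()
    return loss.item()

def predict(fingerprints):
    """Per-target activity probabilities for query molecules."""
    with torch.no_grad():
        return torch.sigmoid(model(fingerprints))
```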
PyTorch is currently one of the most popular open-source AI libraries. It is gaining a lot of traction in research environments, it integrates tightly with the NumPy ecosystem, and it builds its computation graph dynamically, which makes models easier to debug.
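As a small, self-contained illustration of those last two points (unrelated to the model itself): NumPy arrays convert directly to tensors, and because the graph is built as the code runs, plain Python control flow and debugging work inside a computation.

```python
import numpy as np
import torch

# NumPy arrays and tensors convert back and forth (sharing memory on CPU)
x_np = np.random.rand(4, 3).astype(np.float32)
x = torch.from_numpy(x_np)

# The graph is built while the code runs, so ordinary Python control flow,
# prints and breakpoints work inside the forward computation
w = torch.randn(3, 1, requires_grad=True)
y = x @ w
if y.mean().item() > 0:        # branch on an intermediate value
    y = torch.relu(y)
y.sum().backward()             # gradients flow through whichever branch ran
print(w.grad.shape, y.detach().numpy().shape)
```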
We have some interesting references, we have data in ChEMBL, we have PyTorch and RDKit... what are we waiting for?
First of all, we need to extract the data from ChEMBL and format it for our purpose. The following notebook explains how to do this step by step. The output is an H5 file, which you can also download from here in case you want to go directly to the network training phase.
Notebook to extract the data
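If you prefer to see the core preprocessing idea in a few lines rather than the full notebook, here is a hedged sketch: parse SMILES with RDKit, compute Morgan fingerprints, and store the matrix in an HDF5 file. The fingerprint parameters, file name and dataset layout are illustrative; the notebook is the authoritative version.

```python
import h5py
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def smiles_to_fp(smiles, n_bits=2048, radius=2):
    """Morgan fingerprint as a NumPy float vector; None if the SMILES fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.float32)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy molecules standing in for the ChEMBL activity export
smiles_list = ["CCO", "c1ccccc1O", "CC(=O)Oc1ccccc1C(=O)O"]
fps = np.stack([fp for fp in (smiles_to_fp(s) for s in smiles_list) if fp is not None])

with h5py.File("mtask_data.h5", "w") as f:    # file name and layout are illustrative
    f.create_dataset("fps", data=fps)
    # the label matrix (molecules x targets) and a matching mask would go alongside
```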
Nice! Now that we have the data, let's go through the main notebook and train a model!
Notebook to train the model
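Regarding the ONNX-exported models mentioned in the update above, the export step itself is short. Here is a sketch assuming the `model` and the 2048-bit fingerprint input from the earlier sketches; names and file paths are illustrative.

```python
import torch

# Export the trained model to ONNX so it can be used outside Python
# (e.g. from the KNIME protocol mentioned at the top of the post).
model.eval()
dummy_input = torch.zeros(1, 2048)
torch.onnx.export(
    model,
    dummy_input,
    "multitask_model.onnx",                  # output file name is illustrative
    input_names=["fingerprint"],
    output_names=["logits"],
    dynamic_axes={"fingerprint": {0: "batch"}, "logits": {0: "batch"}},
)
```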
This was a simple example. We hope you enjoyed it and will be inspired to experiment with deeper architectures, skip connections, different learning rate strategies, more epochs, early stopping... and so on!
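As a starting point for the learning-rate and early-stopping suggestions, here is a minimal sketch combining `ReduceLROnPlateau` with a patience counter. `run_one_epoch` is a hypothetical helper that trains for one epoch and returns the validation loss; the patience values are arbitrary.

```python
import torch

# 'optimizer' follows the training sketch above; patience values are arbitrary
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, factor=0.5, patience=3)

best_val, epochs_without_improvement = float("inf"), 0
for epoch in range(200):
    val_loss = run_one_epoch()     # hypothetical helper: one epoch of training + validation
    scheduler.step(val_loss)       # lower the learning rate when the validation loss stalls
    if val_loss < best_val:
        best_val, epochs_without_improvement = val_loss, 0
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= 10:   # stop early after 10 epochs without improvement
            break
```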
Notebooks are also available on GitHub