Using autoencoders for molecule generation

Some time ago we found the following paper, https://arxiv.org/abs/1610.02415, so we decided to take a look at it and train the described model using ChEMBL.

Luckily, we also found two open source implementations of the model: the original authors' one, https://github.com/HIPS/molecule-autoencoder, and https://github.com/maxhodak/keras-molecules. We decided to rely on the latter, as the original author states that it might be easier to have greater success using it.

What is the paper about? It describes how molecules can be generated, and even designed towards desired properties, using autoencoders.

First of all we are going to give a simple, not very technical introduction for those who are not familiar with autoencoders, and then go through an IPython notebook showing a few examples of how to use the model.

  1. Autoencoder introduction


Autoencoders are among the many popular unsupervised deep learning algorithms used nowadays for many different fields and purposes. They work with two jointly trained main blocks, an encoder and a decoder, both made of neural networks.
In classical cryptography, the cryptographer defines encoding and decoding functions to make the data impossible to read for anyone who might intercept the message but does not have the decoding function. A classic example of this is the Caesar cipher https://en.wikipedia.org/wiki/Caesar_cipher .

However, using autoencoders we don't need to define the encoding and decoding functions ourselves; this is exactly what the autoencoder does for us. We just need to set up the architecture of our autoencoder, and it will automatically learn the encoding and decoding functions by minimizing a loss (also called cost or objective) function with an optimization algorithm. In an ideal world we would reach a loss of 0.0, which would mean that all the data we used as input is perfectly reconstructed after the encoding. This is not usually the case :)

So, after the encoding phase we get an intermediate representation of the data (also called the latent representation or code). This is why it's said that autoencoders can learn a new representation of data.
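To make this concrete, here is a minimal, self-contained autoencoder in Keras (the library used by the implementation we chose). The layer sizes and the random training data are arbitrary, purely for illustration; they are not the paper's model:

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

input_dim, latent_dim = 100, 16  # arbitrary sizes, illustration only

inputs = Input(shape=(input_dim,))
encoded = Dense(latent_dim, activation='relu')(inputs)     # encoder
decoded = Dense(input_dim, activation='sigmoid')(encoded)  # decoder

autoencoder = Model(inputs, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')

# the model is trained to reconstruct its own input
x = np.random.rand(1000, input_dim)
autoencoder.fit(x, x, epochs=10, batch_size=64)

# the encoder alone maps data to its latent representation
encoder = Model(inputs, encoded)
z = encoder.predict(x[:5])  # shape (5, 16)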

The two most typical scenarios for using autoencoders are:

  1. Dimensionality reduction: by setting up a bottleneck layer (the layer in the middle) with lower dimensionality than the input layer, we get a lower-dimensional representation of our data in the latent space. This can be loosely compared to classic PCA. The difference is that PCA is purely linear, while autoencoders usually use non-linear transfer functions (multiple layers with relu, tanh, sigmoid... transfer functions). The optimal solution for an autoencoder using only linear transfer functions is strongly related to PCA. https://link.springer.com/article/10.1007%2FBF00332918

  2. Generative models: as the latent representation (the representation after the encoding phase) is just an n-dimensional array, it is really tempting to artificially generate n-dimensional arrays and try to decode them in order to get new items (molecules!) based on the learnt representation. This is what we will achieve in the following example.

  2. Model training


Except for RNNs, most machine/deep learning approaches require a fixed-length vector as input. The authors decided to take SMILES no longer than 120 characters and one-hot encode them to feed the model. This left out less than 3% of the molecules in ChEMBL, all of them above 1000 daltons.
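To make that input representation concrete, here is a minimal sketch of padding and one-hot encoding a SMILES string. The charset below is a toy subset for illustration; the real one is derived from all characters present in the training data:

import numpy as np

MAX_LEN = 120
# Toy charset for illustration; the real charset is built from the data.
# The space character is used for right-padding short SMILES.
CHARSET = [' ', 'C', 'O', 'N', 'c', '(', ')', '=', '1', '2']
CHAR_TO_IDX = {c: i for i, c in enumerate(CHARSET)}

def one_hot_smiles(smiles):
    # pad to the fixed length, then set one bit per position/character
    padded = smiles.ljust(MAX_LEN)
    x = np.zeros((MAX_LEN, len(CHARSET)), dtype=np.float32)
    for i, char in enumerate(padded):
        x[i, CHAR_TO_IDX[char]] = 1.0
    return x

x = one_hot_smiles('CC(=O)OC1=CC=CC=C1C(=O)O')  # aspirin
print(x.shape)  # (120, 10)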

We trained the autoencoder on the whole ChEMBL database (except for that 3%), using an 80/20 train/test partition, and reached a validation accuracy of 0.99.

  3. Example


As the code repository only provides a model trained on 500k ChEMBL 22 molecules, and training a model against the whole of ChEMBL is quite a time-expensive task, we wanted to share with you the model we trained on the whole of ChEMBL 23, together with an IPython notebook containing some basic usage examples.

To run the notebook locally, you just need to clone the repository, create a conda environment using the provided environment.yml file and run Jupyter:

cd autoencoder_ipython
conda env create -f environment.yml
source activate <env_name>  # environment name as defined in environment.yml
jupyter notebook


The notebook covers simple usage of the model:

  • Encoding and decoding a molecule (aspirin) to check that the model is working properly.
  • Sampling the latent space next to aspirin and getting auto-generated aspirin neighbours (figure 3a in the original publication), validating the molecules and checking how many of them don't exist in ChEMBL (sketched below).
  • Interpolating between two molecules. This didn't work as well as in the paper.
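For reference, the neighbour sampling boils down to something like the following sketch. `encode` and `decode` are stand-ins for the trained model's encoder and decoder (in keras-molecules these are Keras models), and the noise scale is an arbitrary illustrative value:

import numpy as np
from rdkit import Chem

def sample_neighbours(smiles, encode, decode, n=100, scale=0.05):
    # decode points sampled around the latent vector of the seed molecule,
    # keeping only candidates that RDKit can parse as valid molecules
    z = encode(smiles)
    valid = set()
    for _ in range(n):
        candidate = decode(z + np.random.normal(0.0, scale, size=z.shape))
        if candidate != smiles and Chem.MolFromSmiles(candidate) is not None:
            valid.add(candidate)
    return valid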

  4. Other possible uses for this model


As stated in the original paper, this model can also be used to optimize a molecule towards a desired property (AlogP, QED...).
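A naive illustration of the idea is a simple hill climb in latent space: perturb the latent vector, decode, score the result with RDKit's QED, and keep improvements. The paper does something more principled, training a property predictor on the latent space and optimizing against it; `encode`/`decode` below are again stand-ins for the trained model:

import numpy as np
from rdkit import Chem
from rdkit.Chem import QED

def optimise_qed(smiles, encode, decode, steps=50, scale=0.05):
    # naive hill climbing: keep latent moves that improve QED
    z, best = encode(smiles), smiles
    best_score = QED.qed(Chem.MolFromSmiles(smiles))
    for _ in range(steps):
        z_new = z + np.random.normal(0.0, scale, size=z.shape)
        candidate = decode(z_new)
        mol = Chem.MolFromSmiles(candidate)
        if mol is not None and QED.qed(mol) > best_score:
            z, best, best_score = z_new, candidate, QED.qed(mol)
    return best, best_score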

Latent representations of molecules can also be used as structural molecular descriptors for target prediction algorithms.
Most popular target prediction algorithms use fingerprints. Fingerprints imply an obvious loss of structural information: a molecule can't be reconstructed from its fingerprint representation. As the latent representation preserves all of a molecule's 2D structural information in most cases (0.99 accuracy on the ChEMBL test set), we also believe it might be used to improve the accuracy of fingerprint-based target prediction algorithms.

Hope you enjoy it!

Eloy.
