
Using autoencoders for molecule generation

Some time ago we came across the following paper, https://arxiv.org/abs/1610.02415, and decided to take a look at it and train the described model using ChEMBL.

Luckily, we also found two open source implementations of the model: the original authors' one, https://github.com/HIPS/molecule-autoencoder, and https://github.com/maxhodak/keras-molecules. We decided to rely on the latter, as the original authors state that it might be easier to have success using it.

What is the paper about? It describes how molecules can be generated and specifically designed using autoencoders.

First of all, we will give a simple and not very technical introduction for those who are not familiar with autoencoders, and then go through an IPython notebook showing a few examples of how to use the model.

  1. Autoencoder introduction


Autoencoders are one of the many popular unsupervised deep learning algorithms in use today, applied across many different fields and purposes. They consist of two jointly trained blocks, an encoder and a decoder, both made of neural networks.
In classical cryptography the cryptographer defines encoding and decoding functions to make the data impossible to read for anyone who intercepts the message but lacks the decoding function. A classic example of this is the Caesar cipher: https://en.wikipedia.org/wiki/Caesar_cipher

With autoencoders, however, we don't need to define the encoding and decoding functions ourselves; that is exactly what the autoencoder does for us. We just need to set up the architecture of the autoencoder, and it will automatically learn the encoding and decoding functions by minimizing a loss (also called cost or objective) function with an optimization algorithm. In an ideal world we would reach a loss of 0.0, meaning that all the data we used as input is perfectly reconstructed after the encoding. This is not usually the case :)

So, after the encoding phase we get an intermediate representation of the data (also called the latent representation, or code). This is why autoencoders are said to learn a new representation of the data.
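To make the encode/decode idea concrete, here is a toy sketch: a minimal linear autoencoder trained with plain gradient descent on synthetic data. The dimensions, data and training loop are ours for illustration only; this is not the paper's model.

```python
# Toy linear autoencoder: encode to a 2-D latent space, decode back, and
# train both blocks jointly by minimising the reconstruction loss.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 samples in 8 dimensions that really live on a 2-D plane.
Z_true = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 8))
X = Z_true @ A

n_in, n_latent = 8, 2
W_enc = rng.normal(scale=0.5, size=(n_in, n_latent))   # encoder weights
W_dec = rng.normal(scale=0.5, size=(n_latent, n_in))   # decoder weights

def loss(X, W_enc, W_dec):
    X_hat = (X @ W_enc) @ W_dec            # encode, then decode
    return float(np.mean((X - X_hat) ** 2))

lr = 0.02
first = loss(X, W_enc, W_dec)
for _ in range(2000):
    Z = X @ W_enc                          # latent representation ("code")
    X_hat = Z @ W_dec                      # reconstruction
    G = 2.0 * (X_hat - X) / X.size         # dLoss/dX_hat
    W_dec -= lr * Z.T @ G                  # gradient step on the decoder
    W_enc -= lr * X.T @ (G @ W_dec.T)      # gradient step on the encoder
last = loss(X, W_enc, W_dec)
print(first, last)                         # reconstruction loss should drop
```

With only linear layers this is essentially the PCA-related case mentioned below; adding non-linear activations between the layers is what gives autoencoders their extra flexibility.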

The two most typical scenarios for using autoencoders are:

  1. Dimensionality reduction: by setting up a bottleneck layer (the layer in the middle) with lower dimensionality than the input layer, we get a lower-dimensional representation of our data in the latent space. This can be loosely compared to classic PCA. The difference is that PCA is purely linear, whereas autoencoders usually use non-linear transfer functions (multiple layers with ReLU, tanh, sigmoid... transfer functions). In fact, the optimal solution for an autoencoder using only linear transfer functions is strongly related to PCA: https://link.springer.com/article/10.1007%2FBF00332918

  2. Generative models: as the latent representation (the representation after the encoding phase) is just an n-dimensional array, it is really tempting to artificially generate n-dimensional arrays and try to decode them in order to get new items (molecules!) based on the learnt representation. This is what we will do in the following example.
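The generative scenario can be sketched in a few lines; `decode` here is a toy stand-in for a trained decoder (in the real model it would map a latent vector back to a SMILES string), and the latent dimensionality is an assumption of ours.

```python
# Generate "new items" by sampling latent vectors and decoding them.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 56                          # illustrative choice, not the model's

def decode(z):
    # Stand-in for a trained decoder: quantise the vector so that nearby
    # latent points can decode to the same item, as real decoders do.
    return tuple(np.sign(z))

# Sample latent vectors from a standard normal and decode each of them.
samples = [decode(rng.normal(size=LATENT_DIM)) for _ in range(10)]
print(len(samples))
```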

  2. Model training


Except for RNNs, most machine/deep learning approaches require a fixed-length vector as input. The authors decided to take SMILES strings no longer than 120 characters and one-hot encode them to feed the model. This left out less than 3% of the molecules in ChEMBL, all of them above 1000 daltons.
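The fixed-length one-hot encoding can be sketched like this; the character set below is a tiny illustrative subset, not the one actually used by the model.

```python
import numpy as np

CHARSET = [' ', 'C', 'c', 'O', 'N', '(', ')', '=', '1', '2']  # ' ' pads
MAX_LEN = 120
IDX = {ch: i for i, ch in enumerate(CHARSET)}

def one_hot(smiles, max_len=MAX_LEN):
    """Pad a SMILES string to a fixed length and one-hot encode it."""
    if len(smiles) > max_len:
        raise ValueError("SMILES longer than the fixed input length")
    padded = smiles.ljust(max_len)                  # pad with spaces to 120
    X = np.zeros((max_len, len(CHARSET)), dtype=np.float32)
    for i, ch in enumerate(padded):
        X[i, IDX[ch]] = 1.0                         # one hot bit per position
    return X

X = one_hot('c1ccccc1')                             # benzene
print(X.shape)                                      # (120, 10)
```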

We trained the autoencoder using the whole ChEMBL database (except that 3%), with an 80/20 train/test partition, reaching a validation accuracy of 0.99.

  3. Example


As the code repository only provides a model trained with 500k ChEMBL 22 molecules, and training a model against the whole of ChEMBL is quite a time-expensive task, we wanted to share the model we trained with the whole of ChEMBL 23, together with an IPython notebook containing some basic usage examples.

To run the notebook locally, you just need to clone the repository, create a conda environment using the provided environment.yml file and start jupyter notebook:

cd autoencoder_ipython
conda env create -f environment.yml
jupyter notebook


The notebook covers simple usage of the model:

  • Encoding and decoding a molecule (aspirin) to check that the model is working properly.
  • Sampling the latent space next to aspirin and getting auto-generated aspirin neighbours (figure 3a in the original publication), validating the molecules and checking how many of them don't already exist in ChEMBL.
  • Interpolating between two molecules. This didn't work as well as in the paper.
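The interpolation in the last point boils down to a linear walk between two latent vectors. A sketch, under the assumption that in the notebook the endpoints come from encoding two molecules and every step is decoded back to a SMILES string:

```python
import numpy as np

def interpolate(z_a, z_b, n_steps=5):
    """Linear interpolation between two latent vectors, endpoints included."""
    return [(1.0 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, n_steps)]

z_a = np.zeros(56)        # stand-in for encode(smiles_a) in the real model
z_b = np.ones(56)         # stand-in for encode(smiles_b)
path = interpolate(z_a, z_b)
# each vector in `path` would then be passed through the decoder
print(len(path))          # 5
```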

  4. Other possible uses for this model


As stated in the original paper, this model can also be used to optimize a molecule towards a desired property (AlogP, QED...).
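A hedged sketch of what such an optimisation looks like in latent space: gradient ascent on a differentiable surrogate of the property. The property function here is a made-up quadratic, not AlogP or QED.

```python
import numpy as np

def property_score(z):
    # Made-up property, maximised at z = 1; a real setup would train a
    # differentiable predictor of AlogP, QED, etc. on latent vectors.
    return -float(np.sum((z - 1.0) ** 2))

def property_grad(z):
    return -2.0 * (z - 1.0)

z = np.zeros(16)                       # start from some molecule's latent vector
for _ in range(100):
    z += 0.1 * property_grad(z)        # move toward higher property values
# decode(z) would then give the optimised molecule in the real model
print(property_score(z))               # close to the maximum, 0.0
```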

Latent representations of molecules can also be used as structural molecular descriptors for target prediction algorithms.
The most popular target prediction algorithms use fingerprints. Fingerprints entail an obvious loss of structural information; a molecule can't be reconstructed from its fingerprint representation. As the latent representation preserves all the 2D structural information of a molecule in most cases (0.99 accuracy on the ChEMBL test set), we also believe it may be used to improve the accuracy of fingerprint-based target prediction algorithms.
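As a sketch of that idea, here is a toy nearest-neighbour target prediction that uses latent vectors as descriptors; the vectors and target labels below are synthetic stand-ins, not ChEMBL data.

```python
import numpy as np

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 56))       # latent vectors of known molecules
targets = rng.integers(0, 3, size=100)    # their (synthetic) target labels

def predict_target(z, latent, targets):
    # Cosine similarity to every known molecule; predict the closest one's target.
    sims = latent @ z / (np.linalg.norm(latent, axis=1) * np.linalg.norm(z))
    return int(targets[np.argmax(sims)])

z_query = latent[0] + 0.01 * rng.normal(size=56)   # a near-duplicate query
print(predict_target(z_query, latent, targets))
```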

Hope you enjoy it!

Eloy.

