
Using autoencoders for molecule generation

Some time ago we came across the following paper, https://arxiv.org/abs/1610.02415, and decided to take a look at it and train the described model using ChEMBL.

Luckily, we also found two open source implementations of the model: the original authors' one (https://github.com/HIPS/molecule-autoencoder) and https://github.com/maxhodak/keras-molecules. We decided to rely on the latter, as the original authors state that it might be easier to have greater success using it.

What is the paper about? It describes how molecules can be generated and specifically designed using autoencoders.

First of all, we are going to give a simple and not very technical introduction for those who are not familiar with autoencoders, and then go through an IPython notebook showing a few examples of how to use the model.

  1. Autoencoder introduction


Autoencoders are one of the many popular unsupervised deep learning algorithms used nowadays in many different fields and for many purposes. They work with two jointly trained main blocks, an encoder and a decoder, both made of neural networks.
In classical cryptography, the cryptographer defines encoding and decoding functions to make the data impossible to read for those who might intercept the message but do not have the decoding function. A classic example of this is the Caesar cipher (https://en.wikipedia.org/wiki/Caesar_cipher).

However, using autoencoders we don't need to set up the encoding and decoding functions; this is exactly what the autoencoder does for us. We just need to set up the architecture of our autoencoder, and it will automatically learn the encoding and decoding functions by minimizing a loss (also called cost or objective) function with an optimization algorithm. In an ideal world we would reach a loss of 0.0, which would mean that all the data we used as input is perfectly reconstructed after encoding and decoding. This is not usually the case :)

So, after the encoding phase we get an intermediate representation of the data (also called the latent representation or code). This is why autoencoders are said to learn a new representation of the data.
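To make this more concrete, below is a minimal autoencoder sketch in Keras (the library used by keras-molecules). It is purely illustrative: the 784-dimensional input, the layer sizes and the 2-dimensional latent space are arbitrary choices, not the model from the paper.

# Minimal autoencoder sketch (illustrative only, not the paper's model).
from keras.layers import Input, Dense
from keras.models import Model

input_dim, latent_dim = 784, 2

inputs = Input(shape=(input_dim,))
encoded = Dense(128, activation='relu')(inputs)           # encoder
latent = Dense(latent_dim, activation='relu')(encoded)    # bottleneck / latent code
decoded = Dense(128, activation='relu')(latent)           # decoder
outputs = Dense(input_dim, activation='sigmoid')(decoded)

autoencoder = Model(inputs, outputs)
encoder = Model(inputs, latent)  # standalone encoder reusing the same trained layers

# The loss measures how well the input is reconstructed after encoding
# and decoding; training tries to drive it towards that ideal 0.0.
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(x_train, x_train, epochs=50, validation_split=0.2)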

The two most typical scenarios for using autoencoders are:

  1. Dimensionality reduction: By setting up a bottleneck layer (the layer in the middle) with lower dimensionality than the input layer, we get a lower-dimensional representation of our data in the latent space. This can be loosely compared with classic PCA. The difference is that PCA is purely linear, while autoencoders usually use non-linear transfer functions (multiple layers with relu, tanh, sigmoid... transfer functions). In fact, the optimal solution for an autoencoder using only linear transfer functions is strongly related to PCA. https://link.springer.com/article/10.1007%2FBF00332918

  2. Generative models: As the latent representation (the representation after the encoding phase) is just an n-dimensional array, it can be really tempting to artificially generate n-dimensional arrays and decode them in order to get new items (molecules!) based on the learnt representation. This is what we will achieve in the following example; a toy sketch of the idea is shown right after this list.
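Continuing the toy Keras sketch above, a standalone decoder can be built from the trained autoencoder and fed arbitrary latent vectors; sampling them from a normal distribution is just one possible choice.

import numpy as np

# Build a standalone decoder by reusing the two trained decoder layers
# of the autoencoder sketched above.
latent_inputs = Input(shape=(latent_dim,))
x = autoencoder.layers[-2](latent_inputs)
generated = autoencoder.layers[-1](x)
decoder = Model(latent_inputs, generated)

# Decode an artificially generated point of the latent space into a new item.
z = np.random.normal(size=(1, latent_dim))
new_item = decoder.predict(z)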

  2. Model training


Except for RNNs, most machine/deep learning approaches require a fixed-length vector as input. The authors decided to take SMILES no longer than 120 characters and one-hot encode them to feed the model. This only left out less than 3% of the molecules in ChEMBL, all of them above 1000 daltons.
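A minimal sketch of this preprocessing step is shown below. The charset is illustrative (in practice it is derived from the training data), and the actual keras-molecules implementation differs in its details.

import numpy as np

MAX_LEN = 120  # SMILES longer than this are discarded

# Illustrative charset; the real one is built from the training set.
charset = [' ', 'c', 'C', 'O', 'N', '(', ')', '1', '2', '=', '@', 'H', '[', ']']
char_to_idx = {c: i for i, c in enumerate(charset)}

def one_hot_encode(smiles):
    """Pad a SMILES string to MAX_LEN and one-hot encode it."""
    padded = smiles.ljust(MAX_LEN)  # pad with spaces up to 120 characters
    vec = np.zeros((MAX_LEN, len(charset)), dtype=np.float32)
    for i, char in enumerate(padded):
        vec[i, char_to_idx[char]] = 1.0
    return vec

aspirin = 'CC(=O)Oc1ccccc1C(=O)O'
x = one_hot_encode(aspirin)  # shape: (120, 14)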

We trained the autoencoder using the whole ChEMBL database (except that 3%), with an 80/20 train/test partition, reaching a validation accuracy of 0.99.
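In Keras terms, this corresponds to something like the sketch below, where model is the autoencoder and X the one-hot encoded SMILES; the epoch and batch size values are illustrative.

from sklearn.model_selection import train_test_split

# 80/20 train/test partition of the one-hot encoded SMILES.
X_train, X_test = train_test_split(X, test_size=0.2, random_state=42)

# An autoencoder is trained to reproduce its own input.
model.fit(X_train, X_train,
          epochs=100,
          batch_size=256,
          validation_data=(X_test, X_test))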

  3. Example


As the code repository only provides a model trained with 500k ChEMBL 22 molecules, and training a model against the whole of ChEMBL is quite a time-expensive task, we wanted to share with you the model we trained with the whole of ChEMBL 23 and an IPython notebook with some basic usage examples.

To run the notebook locally you just need to clone the repository, create a conda environment using the provided environment.yml file and run Jupyter:

cd autoencoder_ipython
conda env create -f environment.yml
# activate the new environment (its name is defined in environment.yml) before starting Jupyter
jupyter notebook


The notebook covers simple usage of the model:

  • Encoding and decoding a molecule (aspirin) to check that the model is working properly.
  • Sampling the latent space next to aspirin and getting auto-generated aspirin neighbours (figure 3a in the original publication), validating the molecules and checking how many of them don't exist in ChEMBL.
  • Interpolating between two molecules. This didn't work as well as in the paper. A rough sketch of these three steps is shown after this list.
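The sketch below assumes a keras-molecules-style MoleculeVAE model with encoder and decoder sub-models; the weights file name, the charset variable, the noise scale and the number of samples are placeholders, so please check the notebook for the exact calls.

# Rough sketch of the three notebook steps; names and signatures follow
# keras-molecules and may differ slightly from the notebook.
import numpy as np
from molecules.model import MoleculeVAE

aspirin = 'CC(=O)Oc1ccccc1C(=O)O'

model = MoleculeVAE()
model.load(charset, 'chembl_23_model.h5', latent_rep_size=292)  # placeholder file name

# 1. Encode and decode aspirin to check that the model round-trips it.
x = one_hot_encode(aspirin).reshape(1, 120, len(charset))
z = model.encoder.predict(x)
reconstructed = model.decoder.predict(z)  # one-hot output, turn back into a SMILES

# 2. Sample the latent space next to aspirin: add small Gaussian noise
# to its latent vector and decode the neighbours.
neighbours = [model.decoder.predict(z + np.random.normal(scale=0.05, size=z.shape))
              for _ in range(100)]

# 3. Interpolate between two molecules by decoding points on the
# straight line connecting their latent vectors.
x2 = one_hot_encode('c1ccccc1O').reshape(1, 120, len(charset))  # phenol, as a second molecule
z2 = model.encoder.predict(x2)
steps = [model.decoder.predict(z + t * (z2 - z)) for t in np.linspace(0, 1, 10)]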

  4. Other possible uses for this model


As stated in the original paper, this model can also be used to optimize a molecule towards a desired property (AlogP, QED...).

Latent representations of molecules can also be used as structural molecular descriptors for target prediction algorithms.
The most popular target prediction algorithms use fingerprints. Fingerprints involve an obvious loss of structural information: a molecule can't be reconstructed from its fingerprint representation. As the latent representation preserves all of the molecule's 2D structural information in most cases (0.99 accuracy on the ChEMBL test set), we also believe that it may be used to improve the accuracy of fingerprint-based target prediction algorithms.
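As a minimal sketch of that idea, the latent vectors could be fed to any off-the-shelf classifier. Here Z (the latent representations) and y (activity labels against one target) are assumed to be already built, and the choice of a random forest is arbitrary.

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Z: latent representations of the molecules (n_molecules x 292),
# y: binary activity labels against one target. Both assumed precomputed.
clf = RandomForestClassifier(n_estimators=100)
scores = cross_val_score(clf, Z, y, cv=5, scoring='roc_auc')
print('mean ROC AUC: %.3f' % scores.mean())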

Hope you enjoy it!

Eloy.

