Wednesday, 12 July 2017

Using autoencoders for molecule generation

Some time ago we came across the following paper, https://arxiv.org/abs/1610.02415, and decided to take a look at it and train the described model using ChEMBL.

Luckily, we also found two open source implementations of the model: the original authors' one, https://github.com/HIPS/molecule-autoencoder, and https://github.com/maxhodak/keras-molecules. We decided to rely on the latter, as the original author states that it might be easier to have greater success using it.

What is the paper about? It describes how molecules can be generated and specifically designed using autoencoders.

First of all we are going to give a simple and not very technical introduction for those who are not familiar with autoencoders, and then go through an IPython notebook showing a few examples of how to use the model.

  1. Autoencoder introduction


Autoencoders are one of the many popular unsupervised deep learning algorithms used nowadays in many different fields and for many different purposes. They consist of two jointly trained blocks, an encoder and a decoder, both made of neural networks.

In classical cryptography the cryptographer defines an encoding and a decoding function to make the data impossible to read for those who might intercept the message but do not have the decoding function. A classic example of this is the Caesar cipher (https://en.wikipedia.org/wiki/Caesar_cipher).

With autoencoders, however, we don't need to design the encoding and decoding functions ourselves; this is exactly what the autoencoder does for us. We just need to set up the architecture of our autoencoder, and it will automatically learn the encoding and decoding functions by minimizing a loss (also called cost or objective) function with an optimization algorithm. In an ideal world we would reach a loss of 0.0, which would mean that all the data we used as input is perfectly reconstructed after encoding and decoding. This is not usually the case :)

So, after the encoding phase we get an intermediate representation of the data (also called the latent representation or code). This is why it is said that autoencoders can learn a new representation of data.

The two most typical scenarios for using autoencoders are:

  1. Dimensionality reduction: By setting up a bottleneck layer (the layer in the middle) with lower dimensionality than the input layer, we get a lower-dimensional representation of our data in the latent space. This can be compared to classic PCA. The difference is that PCA is purely linear, while autoencoders usually use non-linear transfer functions (multiple layers with ReLU, tanh, sigmoid... transfer functions). The optimal solution for an autoencoder using only linear transfer functions is strongly related to PCA (https://link.springer.com/article/10.1007%2FBF00332918).

  2. Generative models: As the latent representation (the representation after the encoding phase) is just an n-dimensional array, it can be very tempting to artificially generate n-dimensional arrays and try to decode them in order to get new items (molecules!) based on the learnt representation. This is what we will do in the following example.

  2. Model training


Except for RNNs, most machine/deep learning approaches require a fixed-length vector as input. The authors decided to take SMILES no longer than 120 characters and one-hot encode them to feed the model. This leaves out less than 3% of the molecules in ChEMBL, all of them above 1000 daltons.
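To make the input representation concrete, here is a minimal one-hot encoding sketch (our own illustration, not the authors' code); the character set below is an assumption, as the real one is derived from the training data:

import numpy as np

# Minimal sketch of the fixed-length one-hot SMILES encoding described above.
# The character set is illustrative; the real model derives it from the training
# data and stores it alongside the trained weights.
CHARSET = list(" #()+-0123456789=@BCFHILNOPSclnors[]/\\")
MAX_LEN = 120  # SMILES longer than this are discarded

def one_hot_encode(smiles, charset=CHARSET, max_len=MAX_LEN):
    """Pad a SMILES string with spaces to max_len and one-hot encode it."""
    if len(smiles) > max_len:
        raise ValueError("SMILES longer than %d characters" % max_len)
    one_hot = np.zeros((max_len, len(charset)), dtype=np.float32)
    for i, char in enumerate(smiles.ljust(max_len)):
        one_hot[i, charset.index(char)] = 1.0
    return one_hot

print(one_hot_encode("CC(=O)Oc1ccccc1C(=O)O").shape)  # aspirin -> (120, 38)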

We trained the autoencoder using the whole ChEMBL database (except that 3%), using an 80/20 train/test partition, and reached a validation accuracy of 0.99.

  3. Example


As the code repository only provides a model trained with 500k ChEMBL 22 molecules, and training a model against the whole of ChEMBL is quite a time-consuming task, we wanted to share the model we trained with the whole of ChEMBL 23, along with an IPython notebook containing some basic usage examples.

To run the notebook locally you just need to clone the repository, create a conda environment using the provided environment.yml file and run jupyter notebook:

cd autoencoder_ipython
conda env create -f environment.yml
jupyter notebook


The notebook covers simple usage of the model:

  • Encoding and decoding a molecule (aspirin) to check that the model is working properly.
  • Sampling the latent space next to aspirin and getting auto-generated aspirin neighbours (figure 3a in the original publication), validating the molecules and checking how many of them don't exist in ChEMBL; a rough sketch of this idea follows the list.
  • Interpolating between two molecules. This didn't work as well as in the paper.
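The sketch below illustrates the neighbour-sampling idea (it is not the exact notebook code). It assumes the MoleculeVAE class from the keras-molecules repository and reuses the one_hot_encode helper sketched in the model training section above; class, method and file names may differ in detail.

import numpy as np
from molecules.model import MoleculeVAE  # from the keras-molecules repository

charset = CHARSET            # in practice, use the charset stored with the trained model
latent_dim = 292             # latent size used by keras-molecules by default

model = MoleculeVAE()
model.load(charset, 'chembl_23_model.h5', latent_rep_size=latent_dim)  # illustrative file name

# Encode aspirin into the latent space.
aspirin = one_hot_encode("CC(=O)Oc1ccccc1C(=O)O").reshape(1, 120, len(charset))
latent = model.encoder.predict(aspirin)

# Sample neighbours by adding small Gaussian noise to aspirin's latent vector,
# then decode them back into (hopefully valid) SMILES strings.
for _ in range(10):
    neighbour = latent + np.random.normal(0.0, 0.1, size=latent.shape)
    decoded = model.decoder.predict(neighbour)[0]
    smiles = ''.join(charset[i] for i in decoded.argmax(axis=1)).strip()
    print(smiles)

As in the notebook, the decoded strings still need to be validated (e.g. with RDKit) and looked up in ChEMBL to see which neighbours are genuinely new.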

  4. Other possible uses for this model


As stated in the original paper, this model can also be used to optimize a molecule towards a desired property (AlogP, QED...).

Latent representations of molecules can also be used as structural molecular descriptors for target prediction algorithms.
The most popular target prediction algorithms use fingerprints. Fingerprints come with an obvious loss of structural information; a molecule can't be reconstructed from its fingerprint representation. As the latent representation preserves all the 2D structural information of a molecule in most cases (0.99 accuracy on the ChEMBL test set), we believe it may also be used to improve the accuracy of fingerprint-based target prediction algorithms.

Hope you enjoy it!

Eloy.


Wednesday, 5 July 2017

ChEMBL web services webinar 4pm 12th July

We are pleased to announce the next webinar in our ChEMBL webinar series:

ChEMBL, programmatically (part of the EMBL-EBI, programmatically: take a REST from manual searches webinar series) will be held at 4pm (BST) on 12th July.

The webinar will provide an overview of the ChEMBL API and its use, including how to execute API calls from the browser; where to find documentation; how to use filtering and pagination; available output formats; and scripting examples in Python, Bash and R. We will also give examples of how the API can be used to create reusable web components and be integrated into tools such as KNIME and Slack. The webinar will assume a degree of familiarity with the data in ChEMBL, so new users are advised that an introductory ChEMBL webinar is also available: https://www.ebi.ac.uk/training/online/course/chembl-walkthrough-webinar
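If you would like a small taste beforehand, the snippet below uses the chembl_webresource_client Python package (pip install chembl_webresource_client); the specific filters shown are just illustrations, not webinar material.

from chembl_webresource_client.new_client import new_client

# Fetch a single compound by ChEMBL ID (aspirin).
molecule = new_client.molecule
aspirin = molecule.get('CHEMBL25')
print(aspirin['pref_name'])

# Filtering and pagination: approved drugs with molecular weight <= 350;
# the client pages through the results lazily.
light_drugs = molecule.filter(max_phase=4, molecule_properties__mw_freebase__lte=350)
for m in light_drugs[:5]:
    print(m['molecule_chembl_id'], m['pref_name'])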

To register for this webinar, please see http://ow.ly/hnxn30dlyMc

The ChEMBL Team

Friday, 23 June 2017

ChEMBL release 23, technical aspects.


In this blog post, we would like to highlight some important technical improvements we've deployed as part of the ChEMBL 23 release. You may find them useful if you work with ChEMBL data using FTP downloads and the API.

1. FPS format support.


Many users download our SDF file containing all ChEMBL structures and, as an immediate next step, compute fingerprints from it. We decided to help them and publish precomputed fingerprints in the FPS text fingerprint format. The FPS format was developed by Andrew Dalke to "define and promote common file formats for storing and exchanging cheminformatics fingerprint data sets". It is used by chemfp, RDKit, OpenBabel and CACTVS, and we believe it deserves promotion. The computed fingerprints are 2048-bit, radius-2 Morgan fingerprints, which we think is the most popular and generic type, but please let us know in the comments if another type would serve better. We are fully aware that the best fingerprint type can heavily depend on the specific application, but this one should be helpful for educational purposes and prototyping.
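As an illustration (not our exact production pipeline), computing this type of fingerprint with RDKit and writing it as FPS-style records could look roughly like this; the header lines are simplified.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

compounds = [("CC(=O)Oc1ccccc1C(=O)O", "CHEMBL25")]  # aspirin as an example

with open("example.fps", "w") as out:
    # Simplified FPS header; each record is a hex-encoded fingerprint, a tab and an identifier.
    out.write("#FPS1\n#num_bits=2048\n#type=RDKit-Morgan/1 radius=2\n")
    for smiles, chembl_id in compounds:
        mol = Chem.MolFromSmiles(smiles)
        fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
        out.write("%s\t%s\n" % (DataStructs.BitVectToFPSText(fp), chembl_id))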

2. SQLite dump improvements.


As of release 21 we publish a SQLite dump, which is an embedded file-based database. This has proved to be very useful, but as Andrew Dalke noticed on his blog, the dump wasn't optimised. We decided to follow Andrew's advice and pre-analyze the tables during dump creation. We hope this will save you a few hours of computing. This is also the first ChEMBL release built with the support of our new Luigi-based pipelines. All the FTP files, including the schema image, SQL dumps and RDF data, have been generated automatically using our Python workflows. We hope to automate more and more parts of the release process, which should result in more frequent data releases and increased reproducibility in the long run.
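As a quick example, the dump can be queried with nothing more than Python's standard library (the database file name below follows the release 23 naming and is an assumption):

import sqlite3

# Connect to the ChEMBL 23 SQLite dump downloaded from the FTP site.
conn = sqlite3.connect("chembl_23.db")
cur = conn.cursor()

# Count approved drugs (max_phase = 4) in the molecule_dictionary table.
cur.execute("SELECT COUNT(*) FROM molecule_dictionary WHERE max_phase = 4")
print(cur.fetchone()[0])
conn.close()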

3. API software updates


The ChEMBL API is an open source project. This means that, combining it with the SQL dumps we provide, everyone can use it to create their own API instance. So far the biggest obstacle to integrating the ChEMBL API software with other libraries was the fact that the API was built on top of very old dependencies. For example, we were using Django 1.5, which was released about 5 years ago. We decided to upgrade the software to make it compatible with the latest versions of the most critical dependencies. After this change, the ChEMBL API software stack is compatible with Django 1.11(.2) (which is the LTS edition), haystack 2.6.0, tastypie 0.13.3 and others. As part of the upgrade process we also switched from virtualenv to conda as the default deployment environment, which allowed us to easily install the latest RDKit (2017.03.2 at the time of writing) and upgrade the Python interpreter itself (2.7.13). Using conda should make it easier to keep up with future software updates as well, so from now on our software stack should always be using the latest stable dependencies.

All these changes should have a positive impact on performance (more about that in the next paragraph) and increase security as well as compatibility with modern software stacks, so it should be easier to integrate our software with your existing applications. Also, since all the dependencies are Python 3-ready, we are much closer to making the switch to Python 3.x. So far we have migrated our API client library, which is compatible with both Python 2 and 3.

4. API performance improvements


The main reason to upgrade our software stack was to improve the performance of our API. We decided to use the Django Prefetch object (introduced in Django 1.7) to fine-tune SQL queries containing joins, and we carefully analyzed all SQL queries generated by the Django ORM when using the API. Introducing miniconda, which comes with precompiled binaries for the Python interpreter and libraries like numpy, also had a positive impact on performance, especially molecule image generation. We also added a full text search index on the assay description, so now you can perform sophisticated full text queries. For example, searching for activities whose related assays have a description containing 'TG-GATES' would look like this:
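(The snippet embedded in the original post has not survived in this copy; below is a hedged reconstruction using Python's requests library, where the assay_description__search filter name is an assumption based on the API's Django-style lookup syntax.)

import requests

url = "https://www.ebi.ac.uk/chembl/api/data/activity.json"
params = {"assay_description__search": "TG-GATES", "limit": 5}
resp = requests.get(url, params=params)
for activity in resp.json()["activities"]:
    print(activity["activity_id"], activity["assay_chembl_id"])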


5. Extending Solr-based search


The above query can be rewritten to use Solr:
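(Again, the original snippet is missing here; a hedged reconstruction, assuming the activity search endpoint accepts a free-text q parameter, might look like this.)

import requests

url = "https://www.ebi.ac.uk/chembl/api/data/activity/search.json"
resp = requests.get(url, params={"q": "TG-GATES", "limit": 5})
print(resp.json()["page_meta"]["total_count"])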


This query should be much faster than the one from the previous paragraph. We have extended the Solr indexes so that they now cover six ChEMBL entities:


In total we have now indexed 17,793,020 Solr documents. Some more example queries are:

A much more sophisticated query would be one that combines the Solr-based search with DB-based filtering, for example getting all assays that match 'inhibitor' in the description and have an assay type equal to 'A':
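(The embedded example is missing in this copy; a hedged reconstruction combining the search endpoint with a regular filter might look like this, with the parameter names assumed from the API's filter syntax.)

import requests

url = "https://www.ebi.ac.uk/chembl/api/data/assay/search.json"
params = {"q": "inhibitor", "assay_type": "A", "limit": 5}
resp = requests.get(url, params=params)
for assay in resp.json()["assays"]:
    print(assay["assay_chembl_id"], assay["description"][:60])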


Such a "federated" query is quite heavy but we managed to optimise this use case. Still, please bear in mind that chaining search with too many filters may cause a timeout if the query is extremely complex.

6. Faster substructure search.


Our API provides the functionality to perform molecule substructure and similarity searches. We noticed that a substructure search with a small query compound, such as benzene, can lead to timeouts. We decided to enumerate all chemically important small structures and precache the results, which should improve substructure search performance. Please note that this will have no impact on the speed of substructure search in our main web interface, because the interface does not use the API at the moment. We are developing a new API-based interface which should address this problem.
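For example, a benzene substructure search through our Python client library looks roughly like this (exactly the kind of small query that benefits from the precached results):

from chembl_webresource_client.new_client import new_client

# Substructure search for benzene; the client talks to the API under the hood.
substructure = new_client.substructure
hits = substructure.filter(smiles="c1ccccc1")
print(len(hits))            # number of ChEMBL compounds containing a benzene ring
for mol in hits[:3]:
    print(mol["molecule_chembl_id"])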

7. New API endpoints


The following new endpoints have been added to the API (a quick example of querying the new drug endpoint through the client library follows the list):

  • compound_record - records an occurrence of a molecule in a document
  • drug - provides information about approved drugs
  • organism - a simple organism classification
  • target_prediction - target prediction results for clinical compounds, currently used on the ChEMBL user interface
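A minimal sketch of querying the new drug endpoint through the client library, assuming the client picks up the new resource automatically:

from chembl_webresource_client.new_client import new_client

drug = new_client.drug
for d in drug.all()[:3]:
    # Print the raw records; field names vary by endpoint, so none are assumed here.
    print(d)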

8. Better API documentation


We updated the main API web page to reflect the recent changes and added a section with examples. The GitHub repository has a new README file as well, and our PyPI packages now point to the GitHub repo.

We also recently published a review paper titled "Using ChEMBL web services for building applications and data processing workflows relevant to drug discovery". The document should become open access and be deposited in PubMed Central (https://www.ncbi.nlm.nih.gov/pubmed/28602100) after six months.

9. Training


Please don't forget that we are organising a webinar on the 12th July about the API. More details will be announced soon.

10. Future plans


Our immediate future plans regarding the API are:
  • providing Swagger-based documentation that can be used to generate client code in any language
  • developing a better KNIME node
  • publishing a collection of reusable web components that consume the API





Tuesday, 30 May 2017

Post-doctoral positions

Two exciting post-doctoral projects are available via the ESPOD and EBPOD schemes between the European Bioinformatics Institute and, respectively, the Sanger Institute and the NIHR Cambridge Biomedical Research Centre (BRC). Post-doctoral fellows appointed via these schemes work on projects under the joint supervision of faculty members from EMBL-EBI and the Sanger or BRC, as appropriate. Specifically:

(a) In collaboration with Mathew Garnet at the Sanger Institute, a project to exploit the potential of combining large-scale drug sensitivity screening platforms with the chemogenomics resources and expertise at the EBI.
Applications can be made via the relevant link here: http://www.ebi.ac.uk/research/postdocs/espods

(b) In collaboration with Vasilis Kosmoliaptsis in the Department of Surgery at Addenbrooke’s hospital, to capitalize on our greater understanding of the molecular basis of the immunological response and the ever-growing volumes of genetic and clinical outcomes data to develop new and improved methods for organ transplantation.
Applications via this page: http://www.ebi.ac.uk/research/postdocs/ebpods

Potential applicants should note that these fellowships are awarded after a competitive selection process; more details can be found at the above links or via arl@ebi.ac.uk. The closing date is 1st July.

Friday, 19 May 2017

ChEMBL_23 released


We are pleased to announce the release of ChEMBL_23. This release was prepared on 1st May 2017 and contains:

* 2,101,843 compound records
* 1,735,442 compounds (of which 1,727,112 have mol files)
* 14,675,320 activities
* 1,302,147 assays
* 11,538 targets
* 67,722 source documents

Data can be downloaded from the ChEMBL ftp site: ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_23

Please see ChEMBL_23 release notes for full details of all changes in this release: ftp://ftp.ebi.ac.uk/pub/databases/chembl/ChEMBLdb/releases/chembl_23/chembl_23_release_notes.txt


DATA CHANGES SINCE THE LAST RELEASE

In addition to the regular updates to the Scientific Literature, FDA Orange Book and USP Dictionary of USAN and INN Investigational Drug Names and Clinical Candidates, this release of ChEMBL also includes the following new data:

Patent Bioactivity Data
With funding from the NIH Illuminating the Druggable Genome project (https://commonfund.nih.gov/idg), we have extracted bioactivity data relating to understudied druggable targets from a number of patent documents and added this data to ChEMBL.  

Curated Drug Pharmacokinetic Data
We have manually extracted pharmacokinetic parameters for approved drugs from DailyMed drug labels. 

Drug information from British National Formulary and ATC classification
We have now included compound records for drugs that are in the WHO ATC classification or the British National Formulary (BNF). Currently only BNF drugs that already exist in ChEMBL have been assigned compound records. In future releases we will add new BNF drugs to ChEMBL.

Deposited Data Sets
CO-ADD, The Community for Open Antimicrobial Drug Discovery (http://www.co-add.org), is a global open-access screening initiative launched in February 2015 to uncover significant and rich chemical diversity held outside of corporate screening collections. CO-ADD provides unencumbered free antimicrobial screening for any interested academic researcher.  CO-ADD has been recognised as a novel approach in the fight against superbugs by the Wellcome Trust, who have provided funding through their Strategic Awards initiative. Open Source Malaria (OSM) is aimed at finding new medicines for malaria using open source drug discovery, where all data and ideas are freely shared, there are no barriers to participation, and no restriction by patents. The initial set of deposited data from the CO-ADD project consists of OSM compounds screened in CO-ADD assays (DOI = 10.6019/CHEMBL3832881).

Modelled on the Malaria Box, the MMV Pathogen Box contains 400 diverse, drug-like molecules active against neglected diseases of interest and is available free of charge (http://www.pathogenbox.org). The Pathogen Box compounds are supplied in 96-well plates, containing 10 µL of a 10 mM dimethyl sulfoxide (DMSO) solution of each compound. Upon request, researchers around the world will receive a Pathogen Box of molecules to help catalyse neglected disease drug discovery. In return, researchers are asked to share any data generated in the public domain within 2 years, creating an open and collaborative forum for neglected disease drug research. The initial set of assay data provided by MMV has now been included in ChEMBL (DOI = 10.6019/CHEMBL3832761).


FORTHCOMING CHANGES

Schema changes will be made in ChEMBL_24 to accommodate more complex data types. Details of these changes will be released soon. Please follow the ChEMBL blog or sign up to the ChEMBL announce mailing list for details (http://listserver.ebi.ac.uk/mailman/listinfo/chembl-announce)

Changes will also be made in ChEMBL_24 to the way some of the physicochemical properties are calculated. Details of these changes will be announced soon.


Funding acknowledgements:

Work contributing to ChEMBL_23 was funded by the Wellcome Trust, EMBL Member States, Open Targets, National Institutes of Health (NIH) Common Fund, EU Innovative Medicines Initiative (IMI) and EU Framework 7 programmes. Please see https://www.ebi.ac.uk/chembl/funding for more details.


The ChEMBL Team


If you require further information about ChEMBL, please contact us: chembl-help@ebi.ac.uk

# To receive updates when new versions of ChEMBL are available, please sign up to our mailing list: http://listserver.ebi.ac.uk/mailman/listinfo/chembl-announce
# For general queries/feedback please email: chembl-help@ebi.ac.uk
# To report any problems with data content please email: chembl-data@ebi.ac.uk
# For details of upcoming webinars, please see: http://chembl.blogspot.com/search/label/Webinar

Wednesday, 5 April 2017

Technical internships at ChEMBL


We are looking for Computer Science (and related fields) students with strong programming skills to join our team for 3-6 month internships. This is not necessarily a summer internship programme; you can start whenever is convenient for you after being accepted. Please take a look at some of the research ideas / candidate profiles below:

1. Java programmer - we are looking for a person with experience in Java to develop a prototype of new KNIME nodes for interacting with the ChEMBL API. Experience with REST and/or KNIME is a plus but not a requirement - you can learn these during your internship. A very important thing to note is that you should be excited about UX and about creating user-friendly and pragmatic GUIs.

2. C++ programmer - we would like to invite a person passionate about C++ and pattern recognition / image processing to experiment with optimising the open-source OSRA code. OSRA is like OCR, but for molecules. We want to make it faster and more accurate.

3. C++ programmer with graph theory knowledge - chemical compounds are represented as graphs in silico. We want to be able to quickly generate random graphs that would also be valid compounds. Experience with distributed computing, computing grids, network file systems and map-reduce is a plus but not required.

4. JavaScript programmer - "any application that can be written in JavaScript, will eventually be written in JavaScript". This is why we are looking for a person with JS experience to experiment with:
  • Creating prototypes of reusable chemical web widgets using Polymer.
  • Using Emscripten to cross-compile some core chemical software written in C++ to JS.
5. A person with data visualisation skills to explore the Kibana and Kibi tools and create beautiful and informative datavis widgets from ChEMBL data.

6. Someone with a Natural Language Processing background to:
  • Create a dictionary of common spelling mistakes in chemistry patents.
  • Create a network of patent relations using the TextRank algorithm.
  • Explore different approaches to the Named Entity Classification problem.

How to apply?


Just send your CV to kholmes @ ebi.ac.uk with the subject 'ChEMBL Tech Internships'.

When to apply?


You can apply anytime but we will only contact selected candidates.

Will all those internships start at the same time?


No; in fact we are planning to select at most the 2 most interesting candidates at any given time.

Will I get paid?


The internship is paid 800 GBP per month OR funded by your alma mater (whichever is better for you).