
Mini-project: Training a DNN in Python and exporting it to the ONNX format to run its predictions in a C++ micro-service



Python is nowadays the favourite platform of many machine learning scientists. It is an easy-to-learn language with a vast ecosystem of data science and AI tools, which makes it ideal for rapid prototyping.

Models trained with Python DNN libraries like PyTorch and TensorFlow usually perform well enough to be used in production, but some situations require the predictions to be run in C++, e.g. when the best possible performance is required or when the model needs to be integrated with an existing C++ codebase.

ONNX (Open Neural Network Exchange) is an open format designed to enable interoperability between ML/DL frameworks. It allows, for example, models trained in scikit-learn, PyTorch, TensorFlow and other popular frameworks to be converted to the "standard" ONNX format and later used in any programming language with an ONNX runtime available.

In this mini-project we trained a "dummy" DNN (a single-task target predictor) in Python with PyTorch, exported it to the ONNX format and set up a C++ REST micro-service that uses the previously trained model to compute the predictions.
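To make the export step concrete, here is a minimal sketch of what such a model and its export can look like. It is not the repository's actual model: the input size, layer widths, tensor names and file name are illustrative assumptions.

```python
# Minimal sketch, not the repository's actual model: a small single-task
# feed-forward network exported to ONNX. Sizes and names are assumptions.
import torch
import torch.nn as nn

class SingleTaskNet(nn.Module):
    def __init__(self, n_features: int = 1024):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(n_features, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),  # probability of activity against the single target
        )

    def forward(self, x):
        return self.layers(x)

model = SingleTaskNet()
model.eval()

# torch.onnx.export traces the model with a dummy input to build the graph
dummy_input = torch.randn(1, 1024)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",
    input_names=["features"],
    output_names=["prediction"],
    dynamic_axes={"features": {0: "batch"}, "prediction": {0: "batch"}},
)
```

Declaring the batch dimension as dynamic lets the same exported graph score a single molecule or a whole batch, which is handy once the model is served from C++.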

Since the trickiest part in this mini-project is gluing all the components together, we'll leave you with the repository link so you can explore it yourselves :)
In the repository you will find:

- the Python script with its conda environment file and the data to reproduce the training, plus the model already trained and exported (in both ONNX and TorchScript formats).
- a Dockerfile to set up the C++ REST micro-service that runs the predictions using the previously trained model.
- the Python script to verify that the predictions of the PyTorch model and of the exported ONNX model, run in both Python and C++, are the same (a sketch of such a check follows this list).
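As a rough idea of what that consistency check can look like, here is a minimal sketch that continues from the export example above (it reuses that `model` object and the assumed file and tensor names); the repository's actual script may differ.

```python
# Minimal sketch of the parity check, continuing from the export sketch
# above; file name, tensor names and tolerance are assumptions.
import numpy as np
import onnxruntime as ort
import torch

x = torch.randn(8, 1024)

# Prediction from the original PyTorch model
with torch.no_grad():
    torch_out = model(x).numpy()

# Prediction from the exported graph via ONNX Runtime
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
onnx_out = session.run(["prediction"], {"features": x.numpy()})[0]

# Both backends should agree to within floating-point tolerance
assert np.allclose(torch_out, onnx_out, atol=1e-5)
```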

Link to the repository
