One year ago we published a new version of our target prediction models and since then we've been working on their implementation for the upcoming ChEMBL 26 release.
What did we do?
First of all, we re-trained the models with the LightGBM library instead of scikit-learn. By doing this and tuning the parameters a bit, prediction time improved by two orders of magnitude while keeping comparable predictive power. Having faster models allowed us to easily implement a simple web service providing real-time predictions.
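As a rough illustration, training a single per-target activity classifier with LightGBM might look like the sketch below. The fingerprinting, toy data and parameter values are purely illustrative and are not our actual training pipeline:

# Sketch: training one per-target activity classifier with LightGBM.
# Data, features and parameters are illustrative only.
import numpy as np
from lightgbm import LGBMClassifier
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles, n_bits=2048):
    # 2048-bit Morgan fingerprint (radius 2) as a numpy array
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr

# Toy data: SMILES with active (1) / inactive (0) labels for one target
smiles = ["CC(=O)Oc1ccccc1C(=O)O", "CCO", "c1ccccc1O", "CCN(CC)CC"]
labels = [1, 0, 1, 0]

X = np.vstack([fingerprint(s) for s in smiles])
y = np.array(labels)

clf = LGBMClassifier(n_estimators=100, num_leaves=31)  # illustrative parameters
clf.fit(X, y)
print(clf.predict_proba(X)[:, 1])  # probability of activity against this target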
Since we are currently migrating to a more sustainable Kubernetes infrastructure, it made sense to write the small target prediction web service as a cloud-native app from the start. We decided to give OpenFaaS a try as a platform to deploy machine learning models.
OpenFaaS is a framework for building serverless functions with Docker and Kubernetes. It provides templates for deploying functions as REST endpoints in many different programming languages (Python, Node, Java, Ruby, Go...).
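To give an idea of what the Python template expects, here is a minimal handler sketch using the classic python3 template's handle(req) entry point. It is illustrative only; the real target prediction handler lives in our repository and does considerably more:

# handler.py - minimal sketch of an OpenFaaS Python function (classic python3
# template). Illustrative only; it does not run the actual models.
import json

def handle(req):
    # req is the raw request body; the return value becomes the HTTP response
    payload = json.loads(req)
    smiles = payload.get("smiles", "")
    # a real handler would compute fingerprints and run the trained models here
    return json.dumps({"smiles": smiles, "predictions": []})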
The source code of our target predictions OpenFaaS function is now available in our GitHub repository. A Docker image with ready-to-use ChEMBL 25 trained models is also available here.
Does this mean that you won't be able to use the models without a Kubernetes/OpenFaaS installation? No way! It is also easy to start an instance on your local machine:
docker run -p 8080:8080 chembl/mcp:25
# in a different shell
curl -X POST -H 'Accept: */*' -H 'Content-Type: application/json' -d '{"smiles": "CC(=O)Oc1ccccc1C(=O)O"}' http://127.0.0.1:8080
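The same call can be made from Python, assuming the container is running locally as above (the exact shape of the returned JSON is not shown here):

import requests

resp = requests.post(
    "http://127.0.0.1:8080",
    json={"smiles": "CC(=O)Oc1ccccc1C(=O)O"},  # aspirin
)
print(resp.json())  # the predictions returned by the service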
Bear in mind that the service needs to load the models into memory, so it may take a few minutes until it returns predictions. The service only returns predictions from models with a CCR ((sensitivity + specificity) / 2) >= 0.85.
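For reference, CCR is simply the balanced accuracy and can be computed from a binary confusion matrix as in the short sketch below (the numbers are made up):

def ccr(tp, tn, fp, fn):
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return (sensitivity + specificity) / 2

print(ccr(90, 80, 20, 10))  # 0.85 -> this model would pass the threshold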
Comments
docker run -p 8080:8080 chembl/mcp
Forking - python [index.py]
2020/02/06 10:54:02 Started logging stderr from function.
2020/02/06 10:54:02 Started logging stdout from function.
2020/02/06 10:54:02 OperationalMode: http
2020/02/06 10:54:02 Timeouts: read: 10s, write: 10s hard: 10s.
2020/02/06 10:54:02 Listening on port: 8080
2020/02/06 10:54:02 Writing lock-file to: /tmp/.lock
2020/02/06 10:54:02 Metrics listening on port: 8081
2020/02/06 10:54:31 Upstream HTTP request error: Post http://127.0.0.1:5000/: dial tcp 127.0.0.1:5000: connect: connection refused
2020/02/06 10:54:46 Forked function has terminated: signal: killed
when I try this in another Terminal window:
curl -X POST -H 'Accept: */*' -H 'Content-Type: application/json' -d '{"smiles": "CC(=O)Oc1ccccc1C(=O)O"}' http://127.0.0.1:8080
Are you using Docker on Windows or Mac?
Its default config (Docker on Windows and Mac actually runs inside a tiny VM) only allows it to use 2GB of RAM, and it looks like Docker is killing the container process because it runs out of memory when loading the models.
You'll need to change the Docker config to allow it to use 8GB of system memory.
Kind regards,
Eloy