
FPSim2 v0.2.0


FPSim2 is the fast Python3 similarity search tool we are currently using in the ChEMBL interface. It's been a while since we first (and last) posted about it so we thought it deserved an update.

We've just released a new version (v0.2.0) and the highlights since we first talked about it are:
  • CPU-intensive functions moved from Cython to C++ using Pybind11; it now also uses libpopcnt
  • Improved speed, especially when dealing with some edge cases
  • Conda builds available for Windows, Mac and Linux. There is no Conda for ARM, but it also compiles and works on a Raspberry Pi! (and will probably work on the new ARM Macs as well)
  • Tversky search with configurable a and b parameters (previously only exposed as the 'substructure' feature, with a and b fixed to 1 and 0 respectively)
  • Symmetric distance matrix calculation for the whole set is now available
  • Zenodo DOI also available: 10.5281/zenodo.3902922
From a user point of view, the most interesting new feature is probably the distance matrix calculation. Once the fingerprints file is generated, it is very easy to compute:

from FPSim2 import FPSim2Engine

fp_filename = 'chembl_27.h5'
fpe = FPSim2Engine(fp_filename)
csr_matrix = fpe.symmetric_distance_matrix(0.7, n_workers=1)

To give an idea of the scale of the problem, we've calculated the matrix on chembl_27 (1,941,405 compounds) using 2048-bit, radius 2 Morgan fingerprints. 1,941,405 × 1,941,405 similarities is a lot of them: roughly 3.77 trillion.

Fortunately, the similarity matrix is symmetric (upper triangular and lower triangular matrices contain the same data) so we only needed to calculate (1941405 * 1941405 - 1941405) / 2 similarities. Still... this is 1.88 trillion similarities.
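Those two figures are just the square and the triangular number of the set size, which is easy to sanity-check in plain Python (no FPSim2 needed):

```python
# Back-of-the-envelope count of the similarities involved.
n = 1941405                    # compounds in chembl_27
full = n * n                   # full matrix: ~3.77 trillion cells
unique = n * (n - 1) // 2      # upper triangle only: ~1.88 trillion unique pairs
print(f"{full:,} vs {unique:,}")
```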

The role of the threshold is very important since it will help to skip a lot of calculations and save even more system memory. We can get the exact number of similarities that this calculation will need to do with a few lines of code:

from FPSim2.io.chem import get_bounds_range
from tqdm import tqdm

sims = 0
for idx, query in tqdm(enumerate(fpe.fps), total=fpe.fps.shape[0]):
    start, end = get_bounds_range(query, 0.7, 0, 0, fpe.popcnt_bins, "tanimoto")
    # only count pairs in the upper triangle
    next_idx = idx + 1
    start = start if start > next_idx else next_idx
    sims += end - start

1.2 trillion! The threshold saved 1/3 of the calculations and an insane amount of memory. But how much RAM did it save?


3.7 trillion results without a threshold versus 9.2 million with it. Each result consists of two 32-bit integers and one 32-bit float, i.e. 12 bytes, so that's roughly 44 TB versus 110 MB. Yes, terabytes...
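The storage figures follow directly from the 12 bytes per result; a quick arithmetic check:

```python
# Back-of-the-envelope check of the storage figures above.
bytes_per_result = 2 * 4 + 4                 # two 32-bit ints + one 32-bit float = 12 bytes
no_threshold = 3.7e12 * bytes_per_result     # ~4.44e13 bytes, i.e. ~44 TB
with_threshold = 9.2e6 * bytes_per_result    # ~1.1e8 bytes, i.e. ~110 MB
print(no_threshold / 1e12, "TB vs", with_threshold / 1e6, "MB")
```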

The calculation took 12.5 hours on a modern laptop using a single core, and 3.5 hours using 4 cores.

The output is a SciPy CSR sparse matrix that can be used in some scikit-learn and scikit-learn-extra algorithms.
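As a hypothetical usage sketch (toy data, not from the post): scikit-learn's DBSCAN accepts a precomputed sparse distance matrix, where pairs missing from the sparse matrix are simply treated as too distant to be neighbours:

```python
from scipy.sparse import csr_matrix
from sklearn.cluster import DBSCAN

# Toy symmetric sparse distance matrix for 4 items:
# items 0 and 1 are close, items 2 and 3 are close,
# every other pairwise distance is missing (i.e. above the threshold).
rows = [0, 1, 2, 3]
cols = [1, 0, 3, 2]
data = [0.1, 0.1, 0.2, 0.2]
dist = csr_matrix((data, (rows, cols)), shape=(4, 4))

labels = DBSCAN(eps=0.3, min_samples=2, metric="precomputed").fit_predict(dist)
print(labels)
```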

The order of the compounds is the same as in the fingerprints file (remember that compounds get sorted by number of fingerprint features). To get the fps ids:

ids = fpe.fps[:, 0]

The distance matrix can easily be transformed into a similarity matrix:

csr_matrix.data = 1 - csr_matrix.data
# 0's in the diagonal of the matrix are implicit so they are not affected by the instruction above
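On a toy matrix (not ChEMBL data), the in-place conversion, and the fact that implicit zeros are left untouched, can be checked like this:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Toy distance matrix; the zeros (diagonal and pruned pairs) are not stored.
m = csr_matrix(np.array([[0.0, 0.3, 0.0],
                         [0.3, 0.0, 0.9],
                         [0.0, 0.9, 0.0]]))
m.data = 1 - m.data  # only the explicitly stored entries are flipped
print(m.toarray())
```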

and also into a dense matrix, as some algorithms, like MDS, require one:

# classic MDS doesn't work with missing values, so it's better to use it only with threshold 0.0
# if you still want to run MDS on a matrix with missing values, this example uses the SMACOF
# algorithm, which is known to be able to deal with missing data. Use it at your own risk!

from sklearn.manifold import MDS

dense_matrix = csr_matrix.toarray()
# metric=False selects non-metric MDS; scikit-learn's MDS uses the SMACOF algorithm
mds = MDS(dissimilarity="precomputed", metric=False)
pos = mds.fit_transform(dense_matrix)

Bear in mind that, as shown above, generating dense matrices from big datasets can be dangerous!

