
FPSim2 v0.2.0



FPSim2 is the fast Python 3 similarity search tool we currently use in the ChEMBL interface. It's been a while since we first (and last) posted about it, so we thought it deserved an update.

We've just released a new version (v0.2.0) and the highlights since we first talked about it are:
  • CPU-intensive functions moved from Cython to C++ using Pybind11; popcounts are now done with libpopcnt
  • Improved speed, especially when dealing with some edge cases
  • Conda builds available for Windows, Mac and Linux. There is no Conda build for ARM, but it also compiles and works on a Raspberry Pi (and will probably do so on the new ARM Macs as well)
  • Tversky search with a and b parameters (previously exposed as the 'substructure' feature, with a and b fixed to 1 and 0 respectively)
  • Distance matrix calculation for the whole fingerprint set
  • Zenodo DOI available: 10.5281/zenodo.3902922
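For intuition on the new Tversky parameters, the coefficient can be sketched in plain Python on sets of fingerprint on-bit indices (an illustrative formula, not FPSim2's API): with a = b = 1 it reduces to Tanimoto, and with a = 1, b = 0 it behaves like the old 'substructure' screen.

```python
def tversky(a_bits, b_bits, a, b):
    """Tversky coefficient over two sets of on-bit indices."""
    common = len(a_bits & b_bits)
    only_a = len(a_bits - b_bits)   # bits set only in the query
    only_b = len(b_bits - a_bits)   # bits set only in the target
    return common / (a * only_a + b * only_b + common)

query = {1, 2, 3}          # toy fingerprint: indices of set bits
target = {1, 2, 3, 4, 5}

print(tversky(query, target, 1, 1))  # a=b=1 -> Tanimoto: 0.6
print(tversky(query, target, 1, 0))  # a=1, b=0 -> substructure-style screen: 1.0
```

Since the query's bits are a subset of the target's, the a=1, b=0 case returns 1.0, which is exactly why those fixed values used to power the 'substructure' feature.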
From a user point of view, the most interesting new feature is probably the distance matrix calculation. After the fingerprints file is generated, it is very easy to compute it:

from FPSim2 import FPSim2Engine

fp_filename = 'chembl_27.h5'
fpe = FPSim2Engine(fp_filename)
csr_matrix = fpe.symmetric_distance_matrix(0.7, n_workers=1)

To give an idea of the scale of the problem, we've calculated the matrix on chembl_27 (1941405 compounds) using 2048-bit, radius 2 Morgan fingerprints. 1941405 * 1941405 similarities is a lot of them: roughly 3.77 trillion!

Fortunately, the similarity matrix is symmetric (the upper and lower triangles contain the same data), so we only needed to calculate (1941405 * 1941405 - 1941405) / 2 similarities. Still... that is about 1.88 trillion similarities.
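The counts above can be checked with a couple of lines:

```python
n = 1941405  # compounds in chembl_27

all_pairs = n * n              # every compound against every compound
triangle = (n * n - n) // 2    # unique pairs only, diagonal excluded

print(f"{all_pairs:,}")   # 3,769,053,374,025 (~3.77 trillion)
print(f"{triangle:,}")    # 1,884,525,716,310 (~1.88 trillion)
```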

The role of the threshold is very important since it will help to skip a lot of calculations and save even more system memory. We can get the exact number of similarities that this calculation will need to do with a few lines of code:

from FPSim2.io.chem import get_bounds_range
from tqdm import tqdm

sims = 0
for idx, query in tqdm(enumerate(fpe.fps), total=fpe.fps.shape[0]):
    # bounds of the popcount-ordered block that can reach the 0.7 threshold
    start, end = get_bounds_range(query, 0.7, 0, 0, fpe.popcnt_bins, "tanimoto")
    # only count the upper triangle: skip pairs already counted
    next_idx = idx + 1
    start = start if start > next_idx else next_idx
    sims += end - start
print(sims)
1218544601003

1.2 trillion! The threshold saved about a third of the calculations and an enormous amount of memory. But how much RAM did it save?

print(csr_matrix.data.shape[0])
9223048

3.77 trillion vs 9.2 million results. Each result consists of two 32-bit integers and one 32-bit float, i.e. 12 bytes, so that's roughly 45 TB vs 110 MB. Yes, terabytes...
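The back-of-the-envelope numbers come from those 12 bytes per stored result:

```python
n = 1941405                   # compounds in chembl_27
stored = 9223048              # results kept after the 0.7 threshold
bytes_per_result = 2 * 4 + 4  # two int32 indices + one float32 similarity

dense_bytes = n * n * bytes_per_result
sparse_bytes = stored * bytes_per_result

print(f"dense:  {dense_bytes / 1e12:.1f} TB")  # ~45.2 TB
print(f"sparse: {sparse_bytes / 1e6:.1f} MB")  # ~110.7 MB
```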

The calculation took 12.5 hours on a modern laptop using a single core, and 3.5 hours using 4 cores.

The output is a SciPy CSR sparse matrix that can be used in some scikit-learn and scikit-learn-extra algorithms.

The order of the compounds is the same as in the fps file (remember that compounds get sorted by their number of fingerprint features). To get the fps ids:

ids = fpe.fps[:, 0]
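As a toy illustration, here is a schematic stand-in for the fps array (not the real table width): the first column holds the molecule ids, and rows are sorted by popcount.

```python
import numpy as np

# schematic stand-in for fpe.fps: first column is the molecule id,
# the rest packs the fingerprint; rows sorted by number of set bits
fps = np.array([[42, 0b0001],
                [7,  0b0011],
                [13, 0b0111]], dtype=np.uint64)

ids = fps[:, 0]
print(ids.tolist())  # [42, 7, 13]
```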

The distance matrix can easily be transformed into a similarity matrix:

csr_matrix.data = 1 - csr_matrix.data
# 0's in the diagonal of the matrix are implicit so they are not affected by the instruction above
csr_matrix.setdiag(1)
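On a toy SciPy matrix the conversion looks like this (note that setdiag on a CSR matrix may emit a SparseEfficiencyWarning):

```python
import numpy as np
from scipy.sparse import csr_matrix

# toy 3x3 distance matrix; pairs filtered out by the threshold
# are simply not stored, i.e. they are implicit zeros
dist = csr_matrix(np.array([[0.0, 0.2, 0.0],
                            [0.2, 0.0, 0.3],
                            [0.0, 0.3, 0.0]]))

dist.data = 1 - dist.data  # stored distances become similarities
dist.setdiag(1)            # the implicit diagonal becomes explicit 1s

print(dist.toarray())
```

The implicit off-diagonal zeros stay untouched, which is why only the diagonal needs to be set explicitly.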

The CSR matrix can also be converted into a dense matrix, as some algorithms, like MDS, require one:

# classic (metric) MDS can't handle missing values, so it's best used with threshold 0.0
# if you still want to run MDS on a matrix with missing values, this example uses the SMACOF algorithm, which is known to cope with missing data. Use it at your own risk!

from sklearn.manifold import MDS

dense_matrix = csr_matrix.todense()
# with metric=False it uses the SMACOF algorithm
mds = MDS(dissimilarity="precomputed", metric=False)
pos = mds.fit_transform(dense_matrix)

Bear in mind that, as shown above, generating dense matrices from big datasets can be dangerous!
