
FPSim2 v0.2.0



FPSim2 is the fast Python3 similarity search tool we are currently using in the ChEMBL interface. It's been a while since we first (and last) posted about it, so we thought it deserved an update.

We've just released a new version (v0.2.0) and the highlights since we first talked about it are:
  • CPU-intensive functions moved from Cython to C++ with Pybind11. Now also using libpopcnt
  • Improved speed, especially when dealing with some edge cases
  • Conda builds available for Windows, Mac and Linux. There are no Conda builds for ARM, but it also compiles and works on a Raspberry Pi! (and will probably work on the new ARM Macs as well)
  • Tversky search with a and b parameters (it previously had the 'substructure' feature with a and b respectively fixed to 1 and 0); see the sketch after this list
  • Distance matrix calculation of the whole set is now available
  • Zenodo DOI also available: 10.5281/zenodo.3902922
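As an example of the new Tversky search, here is a minimal sketch. The query SMILES, the a/b weights of 0.5 and the exact method signature are illustrative assumptions rather than something taken from the release notes:

from FPSim2 import FPSim2Engine

fpe = FPSim2Engine('chembl_27.h5')

# hypothetical query molecule (aspirin) and illustrative a/b weights
query = 'CC(=O)Oc1ccccc1C(=O)O'
results = fpe.tversky(query, 0.7, 0.5, 0.5, n_workers=1)
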
From a user's point of view, the most interesting new feature is probably the distance matrix calculation. Once the fingerprints file has been generated, it is very easy to compute:

from FPSim2 import FPSim2Engine

fp_filename = 'chembl_27.h5'
fpe = FPSim2Engine(fp_filename)
# only pairs with similarity >= 0.7 are stored, as distances (1 - similarity)
csr_matrix = fpe.symmetric_distance_matrix(0.7, n_workers=1)
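If the fingerprints file hasn't been generated yet, it can be created with create_db_file. A minimal sketch, assuming a chembl_27.smi input file (a hypothetical name) and the same 2048-bit, radius 2 Morgan fingerprints used below:

from FPSim2.io import create_db_file

# input file name is hypothetical; fingerprint type and parameters match this post
create_db_file('chembl_27.smi', 'chembl_27.h5', 'Morgan', {'radius': 2, 'nBits': 2048})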

To give an idea of the scale of the problem and of the time involved, we've calculated the matrix on chembl_27 (1,941,405 compounds) using 2048-bit, radius 2 Morgan fingerprints. 1,941,405 * 1,941,405 similarities is a lot of them: roughly 3.77 trillion.

Fortunately, the similarity matrix is symmetric (the upper and lower triangles contain the same data), so we only needed to calculate (1,941,405 * 1,941,405 - 1,941,405) / 2 similarities. Still... that is 1.88 trillion similarities.

The threshold plays a very important role, since it skips a lot of calculations and saves even more system memory. We can get the exact number of similarities this calculation will need to compute with a few lines of code:

from FPSim2.io.chem import get_bounds_range
from tqdm import tqdm

sims = 0
for idx, query in tqdm(enumerate(fpe.fps), total=fpe.fps.shape[0]):
    # candidate window for this query at threshold 0.7, given the popcount bounds
    # (the two zeros are the Tversky a and b parameters, not used in a tanimoto search)
    start, end = get_bounds_range(query, 0.7, 0, 0, fpe.popcnt_bins, "tanimoto")
    # only count compounds after the current one (upper triangle of the matrix)
    next_idx = idx + 1
    start = start if start > next_idx else next_idx
    sims += end - start
print(sims)
1218544601003

1.2 trillion! The threshold saved 1/3 of the calculations and an insane amount of memory. But how much RAM did it save?

print(csr_matrix.data.shape[0])
9223048

3.77 trillion vs 9.2 million results. Each result is made of two 32-bit integers and one 32-bit float: 44 TB vs 110 MB. Yes, terabytes...
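As a back-of-the-envelope check of those figures (12 bytes per stored result; the exact terabyte number depends on how you round):

n_compounds = 1941405
bytes_per_result = 2 * 4 + 4  # two int32 indices + one float32 value

dense_bytes = n_compounds * n_compounds * bytes_per_result  # ~4.5e13 bytes, tens of TB
sparse_bytes = 9223048 * bytes_per_result                   # ~1.1e8 bytes, ~110 MB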

The calculation took 12.5 hours on a modern laptop using a single core and 3.5 hours using 4 cores.

The output is a SciPy CSR sparse matrix that can be used in some scikit-learn and scikit-learn-extra algorithms.
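For example, here is a minimal sketch of clustering directly on the sparse precomputed distances with scikit-learn's DBSCAN; the eps and min_samples values are illustrative, with eps=0.3 matching the 0.7 similarity threshold used to build the matrix:

from sklearn.cluster import DBSCAN

# DBSCAN accepts precomputed sparse distance matrices; pairs missing from the
# sparse matrix are simply never considered neighbours
db = DBSCAN(eps=0.3, min_samples=5, metric="precomputed")
labels = db.fit_predict(csr_matrix)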

The order of the compounds is the same as in the fps file (remember that compounds get sorted by number of fingerprint features). To get the fps ids:

ids = fpe.fps[:, 0]
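A quick, illustrative sketch of how these ids line up with the matrix (row and column indices of the CSR matrix map straight onto this array):

rows, cols = csr_matrix.nonzero()
# ids of the compounds in the first stored pair
first_pair = ids[rows[0]], ids[cols[0]]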

The distance matrix can easily be transformed into a similarity matrix:

csr_matrix.data = 1 - csr_matrix.data
# zeros on the diagonal of the matrix are implicit, so they are not affected by the line above
csr_matrix.setdiag(1)

and also into a dense matrix, as some algorithms, like MDS, require one:

# classic MDS doesn't work with missing values, so it's better to only use it with threshold 0.0
# if you still want to run MDS on a matrix with missing values, this example uses non-metric (SMACOF)
# MDS, which is known to be able to deal with missing data. Use it at your own risk!

from sklearn.manifold import MDS

dense_matrix = csr_matrix.todense()
# metric=False runs non-metric (SMACOF) MDS
mds = MDS(dissimilarity="precomputed", metric=False)
pos = mds.fit_transform(dense_matrix)

Bear in mind that, as shown before, generating dense matrices from big datasets can be dangerous!
