Costs of Assays



I'm giving some talks over the summer, and am getting bored with some of the material I have, so I'm thinking of new things to put in to add a bit of variety and interest. I'm getting interested in thinking about assay-level attrition, and trying to build more of a taxonomy and inter-relationship map of the assays used in drug discovery. As part of this, there is a cost component for each type of assay, going from very cheap to really, really expensive. Here's a little picture from the presentation I've put together - I used educated guesses for the costs, so please, please critique them!


So, what do people think of the guesstimates of cost per compound per assay point in the picture above? I know it is really variable, that there are startup costs to set something up, etc., etc. But what do you think about the orders of magnitude - are they about right? One of the key features of the numbers I've put there is that there are big transitions at the switch from in silico to in vitro, and then again on entering clinical trials.
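To make the shape of that ladder concrete, here is a minimal sketch. The dollar figures in it are purely illustrative placeholders of my own choosing (not the guesstimates from the picture), there only to show where the big order-of-magnitude jumps sit:

```python
# Illustrative only: placeholder cost-per-compound-per-assay-point figures
# (US dollars). These are hypothetical values chosen to show the shape of
# the ladder, NOT the guesstimates from the picture above.
cost_ladder = [
    ("virtual screening (in silico)", 0.001),
    ("biochemical assay (in vitro)", 5.0),
    ("cell-based assay (in vitro)", 50.0),
    ("animal model (in vivo)", 5_000.0),
    ("clinical trial point", 50_000.0),
]

# Print the fold-change between adjacent levels of the hierarchy; the big
# jumps sit at the in silico -> in vitro switch and on entering the clinic.
for (name, cost), (next_name, next_cost) in zip(cost_ladder, cost_ladder[1:]):
    print(f"{name} -> {next_name}: ~{next_cost / cost:,.0f}x")
```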

The picture at the top of this post (about unicorns) is from the very funny http://www.depressedcopywriter.com/.

Comments

Bin said…
Hi, John, this picture is very interesting. Do you know how they got this data? It would be great to have another one illustrating the timeline of each assay.
Lo Sauer said…
I applaud you for the daring attempt, but it is a difficult subject and biased (except for the unicorns, which of course do exist ;) ). 'In silico' work also requires scientists to write the software in the first place, and the models need to be verified and improved alongside empirical experiments - thus raising costs.

One could easily envision 'selling' in silico results based on obsolete software models (I've seen such papers), yet performing a kinase assay past its expiration date is often unthinkable.

The costs of clinical trials are incredibly varied depending on the type of drug and drug target in question, whereas those of in silico work typically are not.

In silico models are of course primarily optimized to compute only what is needed, and are constrained by the underlying model, whereas empirical data generation is limited by other constraints, and often much more data is generated than is ever published (especially when we are talking about corporate science).
jpo said…
Yes, of course, each problem is different, and the costs can be very low or very high. Across each level of the assay hierarchy there will be cheaper assays and more expensive ones, but I made an estimate.

The costs of development are not factored into my numbers either, since it is difficult to know where to stop... A scientist writes the software, and has a salary that pays them during this time, but do you count the cost of their education, etc.?

It's a difficult problem!
Vladimir said…
Dear John,
I am interested in the source of these prices. Does any reference exist supporting them?
Thanks!
jpo said…
Hi Vladimir.

The costs are just my personal estimates, so they are just that: estimates. There could be some better ways of getting them - for example, going to a CRO and asking for quotes for a series of defined, typical assays. The issue there is that many factors would complicate things - they would have a profit margin to include, and would also want to recoup the cost of capital, etc. Secondly, they would not be interested in running a single biochemical assay for a few dollars, and then there would be a 'volume discount' to handle.

Another interesting number, alongside the timeline suggested by Bin, would be the number of assays run per year - my guess is that it would be many billions at the virtual screening end of the spectrum, through to maybe a few thousand at the clinical trials end.
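As a back-of-the-envelope sketch, multiplying those guessed throughputs by a placeholder unit cost per point shows how the two ends of the spectrum compare in total annual spend - both inputs below are hypothetical, and only the orders of magnitude matter:

```python
# Back-of-the-envelope annual spend per tier: points run per year (the
# ballparks guessed above) times a placeholder unit cost per point.
tiers = {
    "virtual screening": (10_000_000_000, 0.001),  # "many billions" of points
    "clinical trials": (5_000, 50_000),            # "a few thousand" points
}

for tier, (points_per_year, unit_cost) in tiers.items():
    print(f"{tier}: ~${points_per_year * unit_cost:,.0f} per year")
```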
