Reversing Eroom’s Law: can computers dramatically impact the productivity of drug discovery?


Summary

Schrödinger, Inc., a company founded on the idea that computers can dramatically accelerate the discovery of better (more efficacious and less toxic) drugs, has realized this vision by making breakthroughs in scientific algorithms and by leveraging Moore’s law. The technology is at an inflection point: the accuracy of biomolecular simulations is now consistently high enough to truly drive preclinical drug discovery projects. This promises to transform the drug discovery industry in the same way that computers have transformed nearly every other industry, from movie making to the manufacture of airplanes. The strategic inflection point in nanoscale simulation presents investors with an opportunity to ride a wave of rapid innovation and exceptional value creation.

 

Increasing Cost of Developing FDA Approved Drugs – Eroom’s Law: Causes and Possible Remedies

At various times over the past 30 years, computer technology has been advanced as a means of dramatically increasing the efficiency of developing new drugs. The cost of computing has declined by a factor of 2 roughly every 18-24 months, in accordance with Moore’s law, for more than 50 years. In contrast, and somewhat alarmingly, the average cost to develop a new, FDA-approved drug has increased steadily over the past 15 years, and now stands at approximately $1 billion. This phenomenon, which has been given the name Eroom’s law (“Moore” spelled backwards), threatens not only the viability of major pharmaceutical and biotechnology companies, but also our society’s ability to extend, at an affordable cost, the vast improvements made in health care in the 20th century. The large increase in costs has limited the rate at which new life-saving medicines are produced.

A recent analysis of 10 years’ worth of small-molecule drug discovery projects from four large companies – Pfizer, AstraZeneca, Eli Lilly, and GlaxoSmithKline – reveals a key cause of failure: poor properties of the drug candidates, such as absorption, distribution, metabolism, excretion, and/or toxicity (collectively referred to as ADMET properties). Even at the preclinical stage (prior to human trials), only 14% of 400 compounds declared “development candidates” passed the animal toxicity testing required before a compound may enter human trials. In Phase 1 clinical trials, ADMET problems accounted for more than 50% of failures to progress to Phase 2. These attrition rates have many possible specific causes, but one thing is clear: despite major advances in chemical synthesis, it still costs a great deal of money to synthesize a drug candidate (on average $5,000), so only a few thousand compounds are typically made over the course of a project. And despite the availability of atomic-resolution structures for many important drug targets, it is still extraordinarily difficult to design just the right molecule, so a few thousand compounds may not be enough to achieve success.

The industry has long recognized that technological breakthroughs are needed to substantively increase the success probabilities of small-molecule drug discovery projects. Many such breakthroughs have been claimed over the past 30 years: structure-based drug design, high-throughput screening, combinatorial chemistry, systems biology, and genomics are some of the most prominent examples. While these techniques are all in use in the pharmaceutical and biotechnology industries today, none has proven to be a panacea in meeting the combination of higher safety and efficacy requirements for new drugs.

As mentioned above, computer technology has been advanced as a means of dramatically increasing the efficiency of developing new drugs. Computers are in fact routinely used by drug discovery teams to store and retrieve chemical and biological information, monitor the progress of projects, and enable graphical visualization of the structures of drug candidates and the proteins they target. However, these applications do not exploit the massive increases in computing power reflected in Moore’s law. The question is whether there is any computational approach in which large scale computing capability can be used inexpensively and efficaciously to select better drug candidates. As will become evident below, the answer is a resounding yes.

 

Using Computers to Search for the Perfect “Billion Dollar Molecule”

The focus here will be on small-molecule drugs (molecules with typically fewer than 100 atoms), which currently constitute ~75% of FDA approvals, but similar questions can be asked about biologics (e.g., antibody therapeutics) as well. Small-molecule drug discovery is a search for the most valuable substances known to man: those that can ameliorate or cure debilitating disease. At the dawn of the pharmaceutical era, Paul Ehrlich, who discovered one of the first effective drugs produced by chemical synthesis, envisioned the existence of a “magic bullet” – a molecule perfectly suited to its prescribed medical function. This concept still drives much of the $50B per year spent on pharmaceutical research.

How can such a molecule be identified? In a typical small-molecule drug discovery project, biological research first identifies a “target”: a protein in the body, or in a pathogen, that one would like to block (or in some cases activate) with the drug molecule. For example, the current standard-of-care AIDS medication, which has transformed HIV infection from a death sentence into a chronic condition, consists of a “cocktail” of three different molecules, each of which inhibits a protein that the virus needs in order to reproduce itself. The targets for AIDS were identified roughly 30 years ago; it then took 10-15 years to design, develop, and test the drugs used today in standard-of-care treatment.

Once a target is identified, the next step in the project is to design and then synthesize a molecule that one hopes will exhibit all the required properties of a good drug; such a molecule is referred to as a drug candidate. Drug discovery is incredibly difficult because the candidate has to pass many tests before clinical trials (where the drug is in most cases tested first in healthy volunteers and then in patients) are even allowed to start – and then has to succeed in the clinic. First, it has to inhibit (or activate) the target strongly enough to have the desired biological effect. But it cannot interfere nearly as strongly with the function of any other protein, because such off-target binding is one of the main causes of undesirable side effects. An oral drug (a drug taken by mouth) must travel from the stomach to the organs of the body where the inhibition is required, without being digested along the way. Upon arriving at a target cell, the drug must cross the cell membrane (which is composed of lipids), an often difficult task. Finally, the drug has to be metabolized and excreted on just the right timescale: not too quickly (or the biological effect would be eliminated), but not too slowly either (or its effects might linger far past the desired treatment period).
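
In practice, constraints like these are often triaged with simple rules of thumb long before any simulation is run; Lipinski’s “rule of five” for oral absorption is the classic example. Below is a minimal sketch in Python (the aspirin-like numbers are purely illustrative, and such heuristics are far coarser than the physics-based methods discussed later):

    from dataclasses import dataclass

    @dataclass
    class Candidate:
        mol_weight: float   # molecular weight, daltons
        logp: float         # octanol/water partition coefficient (lipophilicity)
        h_donors: int       # hydrogen-bond donors
        h_acceptors: int    # hydrogen-bond acceptors

    def passes_rule_of_five(c: Candidate) -> bool:
        """Lipinski's rule of five: a coarse screen for likely oral
        absorption, not a guarantee of drug-likeness."""
        return (c.mol_weight <= 500 and c.logp <= 5
                and c.h_donors <= 5 and c.h_acceptors <= 10)

    # Aspirin-like toy values: small, modestly lipophilic, few H-bonding groups.
    print(passes_rule_of_five(Candidate(mol_weight=180.2, logp=1.2,
                                        h_donors=1, h_acceptors=4)))  # True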

Many current drugs are far from ideal, because they produce various types of side effects, some of which are quite unpleasant and on occasion fatal. The potential high-impact use of computer models in drug discovery can now be readily inferred: what if the key properties of candidate drug molecules could be calculated at a cost far lower, per compound, than the $5,000 required for experimental synthesis and testing? It has been estimated that there are 10^60 possible drug candidates that could be synthesized. (To get a sense of how big this number is, there are roughly 10^21 stars in the universe and 10^46 water molecules in the earth’s oceans.)  Somewhere in that enormous list of molecules is almost certainly one with better properties (likely much better properties) than those created using purely experimental approaches. The challenge is to predict, using computers and accurate scientific algorithms, the key properties a drug candidate must have.

In applying computational methods to drug discovery, the initial goal would be to predict potency, i.e., how tightly a proposed drug candidate binds to the specified protein target; very tight binding is necessary in order to inhibit (or activate) the target at doses low enough that the function of other proteins in the body is not impacted. An exciting possibility would be to build a physical model of the drug molecule interacting with the protein target at an atomic level of detail, and then calculate the strength of the binding using the laws of physics. Physics-based simulations at the nanoscale – that is, simulations that incorporate an atomic level of detail of the physical system – have been used in pharmaceutical research for more than 30 years, but as a methodology for making quantitative predictions, the results were for many years disappointing. Until quite recently, the calculations lacked the accuracy and robustness needed to resolve the differences in binding between candidate molecules, and thus to engineer superior molecules.

Ten years ago, Schrödinger, Inc., set out to develop an accurate and reliable physics-based solution to the problem of drug binding simulation, building on years of earlier work in academic laboratories. Over the past several years, we have seen an inflection point in the technology: accuracy is now consistently high enough to be used effectively to guide real-world drug discovery projects. It is now possible to predict the binding affinity of a drug candidate, using a “computational assay”, for as little as $5 per molecule, as compared to the $5,000 required to make and test the compound experimentally. Furthermore, this cost can be expected to diminish with time, following Moore’s law.

Physics-based simulation methods are already widely used in important engineering applications. No one would build a bridge or tunnel without appropriate calculations of the stresses and strains on the proposed structure. Computer simulations of airflow have replaced wind tunnel experiments in the design of airplanes, such as the Boeing 787 Dreamliner. The flow of electrons in a microprocessor is modeled by sophisticated simulation packages that provide approximate solutions to the Schrödinger equation for crystalline semiconductors.

The challenge of simulating a drug molecule binding to a protein target is considerably more difficult than any of the above problems. Of the three problems described above, only the microprocessor simulations involve nanoscale modeling, and this problem is simplified by the regular structure, and rigidity, of the silicon crystal lattice. In a drug binding simulation, hundreds of thousands of atoms have to move in a complex, irregular dance that must faithfully reproduce the microscopic interactions prescribed by the relevant laws of physics (e.g., the Schrödinger equation).

Physics-based simulation is the bridge between the world of bits, in which Moore’s law holds sway, and the world of atoms, which has for the most part stubbornly resisted the kind of exponential improvement that gave us the Internet and the iPhone. It is the key to digitally enabled drug discovery and, ultimately, to all sorts of new materials and processes.

 

Schrödinger, Inc.

When Schrödinger, Inc., was founded in 1990, the initial goal was modest: to commercialize a new quantum chemistry program developed in Prof. Rich Friesner’s academic laboratory at Columbia University. Funded primarily by NIH Small Business Innovation Research grants, this effort was a success; by 1998, the company had annual sales of approximately $1 million, selling quantum chemistry tools that were 3-5 times faster than the state of the art at the time to pharmaceutical and biotechnology companies.

In 1998, classical (as opposed to quantum) mechanics technology (often referred to as molecular mechanics) was acquired from another Columbia University group, and David Shaw, a computer scientist who founded the D. E. Shaw group and currently serves as Chief Scientist of D. E. Shaw Research, made a significant investment in the company. A decision was made to focus on developing the next generation of tools for drug discovery, and to create integrated solutions capable of quantitative accuracy. Even at the time, it was clear that the latter task was a long-term project that would require patient capital, given the need for better functions describing interatomic energy potentials, faster and more robust simulation algorithms, and considerably more computing power. Fortunately, Schrödinger had a strategic investor who believed that, ultimately, digital simulation had the potential to revolutionize the drug discovery enterprise as it had many other industries. A number of years later, in 2010, Bill Gates became a major investor in Schrödinger; he shared a similar long-term vision and commitment to the transformative nature of the scientific and technical opportunities. Schrödinger could not have successfully embarked on, or completed, the journey described here without the level of confidence, and financial backing, provided by Shaw and Gates.

By 2008, Schrödinger had become a major player in the molecular modeling space, with a number of best-in-class software solutions that were widely used by pharmaceutical modeling groups in drug discovery projects. However, it became apparent that to make further progress, Schrödinger was going to have to work on drug discovery projects itself.

 

Optimizing the Drug Discovery Process around Computation

In 2010, Schrödinger and Atlas Venture, a leading life science venture capital firm, formed a joint venture to push the boundaries of computationally driven drug discovery. The resulting company, Nimbus Therapeutics, has provided an initial picture of what state-of-the-art computation is capable of. Of the three Nimbus projects that have been operative for more than a year, one was partnered with Genentech; a second was sold to Gilead Sciences for $1.2 billion and yielded a compound that recently entered Phase 2 clinical trials; and a third has effectively addressed a seemingly intractable problem in compound design and is advancing toward the clinic.

When Nimbus was started in 2010, the average computational chemistry effort per drug discovery project in large pharmaceutical companies was on the order of 0.25 FTEs. This modest effort reflects the auxiliary role played by computation on project teams. Projects run in Nimbus have employed a significantly higher level of computational effort, as well as access to many more software licenses for key tools and a much higher level of processing power. Of equal importance, the operational protocols in a drug discovery project have been optimized to exploit the rapidly increasing capabilities of modeling methodology. An iterative feedback loop was established in which experience in projects drives improvements in the software, which in turn engenders further optimization of protocols.

The use of computation in advancing projects can be divided into four types of applications:

(1) Target analysis: Schrödinger has developed specialized protocols to assess the druggability of a protein target; these have been essential in improving the probability of success.

(2) Virtual screening: Schrödinger’s virtual screening tools are widely acknowledged as the best in the industry.  Virtual screening involves scanning a large collection of purchasable compounds (currently about 10 million) using fast approximate methods to identify initial hits that can then be optimized into a drug molecule in subsequent stages; a sketch of this filtering loop follows the list below.

(3) Candidate design:  Once initial hits have been obtained, they have to be chemically modified to achieve all the required properties of a drug, including much tighter binding affinity to the protein target. In a Nimbus project, this process entails investigating hundreds of thousands of “design ideas” using a range of computational tools, and involves an intimate collaboration between Nimbus synthetic chemists and biologists, and the Schrödinger computational modelers. These collaborative interactions have been facilitated by a new enterprise software application developed by Schrödinger that enables design ideas to be exchanged, annotated, stored, and viewed by the entire team. Only candidates passing computational filters in the design process make it to the next stage.

(4) Computational assays:  When the precision of the calculation of an important quantity, such as the binding affinity of a drug candidate to the protein target, reaches a level comparable to the estimated error in experimental measurements, the calculation moves from the status of a tool (where the errors are larger in magnitude and less predictable) to that of an “assay” – a filter that can be relied upon, in a statistical sense, to evaluate the property in question. Schrödinger has promoted an increasing number of computational protocols to assay status over the last several years. The emergence of accurate computational assays is a novel development that is a key catalyst for dramatically increased value creation in computationally driven drug discovery projects.
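
As a sketch of the virtual-screening stage in item (2): score every compound in a library with a fast approximate method and keep only the top-ranked hits for follow-up. In the Python sketch below, fast_dock_score is a hypothetical stand-in for a real docking engine (e.g., Schrödinger’s Glide), and the toy library is just a range of ids:

    import heapq
    import random

    def fast_dock_score(compound_id: int) -> float:
        """Hypothetical stand-in for a fast docking score; lower (more
        negative) values mean tighter predicted binding."""
        rng = random.Random(compound_id)   # deterministic toy surrogate
        return rng.gauss(-6.0, 1.5)        # roughly kcal/mol-like toy values

    def virtual_screen(library, n_hits):
        """Score every compound and keep the n_hits best without a full sort."""
        scored = ((fast_dock_score(cid), cid) for cid in library)
        return heapq.nsmallest(n_hits, scored)

    # A toy library of 100,000 ids; a production screen would cover the ~10
    # million purchasable compounds mentioned above.
    hits = virtual_screen(range(100_000), n_hits=50)
    print("best predicted score: %.2f (compound %d)" % hits[0])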

Over the past 6 years, Nimbus and Schrödinger have integrated these components into a highly effective platform that continues to evolve in its ability to drive projects forward.

 

The First Billion Dollar Molecule from the Nimbus/Schrödinger partnership

Acetyl-CoA carboxylase (ACC) is a critical enzyme involved in lipid metabolism. In animal models, blocking ACC forces cells to metabolize fat, thus leading to weight loss and reduction of fat in the liver. Blocking ACC also selectively kills specific types of cancer cells that depend on an ACC-based metabolic pathway for survival. There are thus many disease indications that could in principle benefit from treatment with a molecule that inhibits ACC. Nimbus has chosen initially to focus on nonalcoholic steatohepatitis (NASH), which currently has no effective treatment and, if untreated, can ultimately lead to more serious liver disease (cirrhosis, liver cancer). NASH at present affects 2-5% of Americans.

The Nimbus/Schrödinger collaboration succeeded in developing a highly promising ACC inhibitor, despite the fact that other companies had failed to make progress against this very difficult target. The first key decision was to aim the inhibitor at an unusual allosteric binding site of the protein (i.e., a site other than the one where a substrate of the enzyme would bind), as opposed to the normal site of enzyme activity. Schrödinger’s target analysis indicated that the normal site was undruggable, whereas the allosteric site was amenable to the right type of molecule. A virtual screen then located a viable lead compound, which was transformed via the design/optimization process into a development candidate in just 18 months, after synthesizing only a few hundred compounds (a typical project requires the synthesis of several thousand compounds en route to a development candidate, and takes 3-5 years or longer). The compound successfully passed GLP toxicity testing in 2014, entered Phase 1 clinical trials in 2015, and entered Phase 2 clinical trials in late 2016.

The combination of safety profile, initial evidence of efficacy, first-in-class inhibition of a novel target, and potential to address a large, unmet medical need engendered substantial interest in the compound from major pharmaceutical and biotechnology companies. Ultimately, a deal was concluded with Gilead Sciences, currently the leader in treating liver disease with its Harvoni medication for hepatitis C. The purchase price was $400M upfront plus $800M contingent upon future successful clinical outcomes; a $200M milestone payment was recently made when the clinical compound entered Phase 2.  Further discussion of the ACC project, including the economics for the investors, can be found in a blog post by Bruce Booth, the Atlas partner who co-founded Nimbus with Schrödinger.

Schrödinger is now engaged not only in follow-on projects with Nimbus, but also with a wide range of pharmaceutical, biotechnology, and venture capital partners. The success of the ACC program, coupled with a suite of accurate and robust computational assays validated over the past several years (see below), has generated enormous excitement in the industry.

 

Computational Assays Using Free Energy Perturbation Theory

In the early 1980s, a specialized type of simulation, called free energy perturbation (FEP), was developed to improve the efficiency of rigorous binding affinity calculations. An FEP calculation starts with a reference molecule – a drug candidate whose binding affinity is already known from experiment. Since a drug discovery project is always about improving an existing candidate molecule, a suitable reference is available. During the simulation, the reference molecule is modified, by an “alchemical transformation”, into the new molecule of interest. In addition to binding affinity, FEP can also be used to assess other important properties, such as the solubility of a molecule in water.
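
The statistical-mechanics identity underlying FEP is the Zwanzig relation: the free energy difference between a reference state A and a perturbed state B is dG = -kT ln < exp(-dU/kT) >, where dU = U_B - U_A and the average is taken over configurations sampled from state A. The Python sketch below runs this estimator on toy Gaussian energy differences, for which the exact answer is known, so the estimate can be checked; real FEP obtains the dU samples from molecular dynamics of the actual protein/ligand system:

    import math
    import random

    kT = 0.596  # kcal/mol at ~300 K

    def zwanzig_delta_g(delta_u_samples):
        """Zwanzig (exponential averaging) free energy estimate:
        dG = -kT * ln < exp(-dU / kT) >, averaged over reference-state samples."""
        avg = sum(math.exp(-du / kT) for du in delta_u_samples) / len(delta_u_samples)
        return -kT * math.log(avg)

    # Toy data: dU drawn from a Gaussian. For Gaussian dU the exact result
    # is mean - variance / (2*kT), which provides a sanity check.
    random.seed(0)
    mu, sigma = 1.0, 0.5
    samples = [random.gauss(mu, sigma) for _ in range(200_000)]
    print("estimated dG: %.3f kcal/mol" % zwanzig_delta_g(samples))
    print("analytic  dG: %.3f kcal/mol" % (mu - sigma**2 / (2 * kT)))

In production calculations the alchemical transformation is broken into many small intermediate steps and more robust estimators are used, but the underlying identity is the same; a sketch of that staging idea appears after the list below.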

By 2010, the conditions for developing an accurate and robust FEP methodology that would be useful across many drug discovery projects were in place. Computing power had reached the point where efficient physics-based simulations, using classical mechanics, could be performed, and more than 50,000 protein structures were available in the Protein Data Bank (PDB), the public repository of protein structures, including a large number of interesting drug targets. Schrödinger identified the following key systematic improvements required to bring FEP to computational assay status, and launched a large-scale engineering project to address each of them:

(1) development of the first truly comprehensive and accurate molecular mechanics models covering medicinal chemistry space – such models are essential in representing proteins and drug molecules in a classical computer simulation;

(2) specialized techniques to ensure that, during the course of the simulation, the system is able to visit all important drug candidate/protein receptor structures without requiring excessive computation time (one such idea is sketched after this list);

(3) development of a graphical user interface (GUI) enabling ease of use; prior to the GUI, it could take weeks to months to properly prepare a single calculation, whereas the Schrödinger FEP user interface enables hundreds of calculations to be launched in a few hours;

(4) tools for accurate initial setup of the system, including placement of key water molecules and hydrogen atoms, and the initial positioning of the new drug candidate molecule; and

(5) efficient performance of the simulation code on graphics processing units (GPUs); for physics-based classical simulations, GPUs provide roughly a 10x improvement in cost/performance as compared to traditional CPU-based architectures.
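
One generic technique that helps keep the sampling problem of item (2) tractable is “lambda staging”: rather than perturbing the reference molecule into the new one in a single step, the alchemical transformation is broken into a series of small windows, each of which is easy to sample well, and the per-window free energies are summed. This is a standard FEP device, not a description of Schrödinger’s proprietary sampling methods; the Gaussian sampler below is a toy stand-in for real simulation data:

    import math
    import random

    kT = 0.596  # kcal/mol at ~300 K
    random.seed(1)

    def sample_delta_u(lam_from, lam_to, n=50_000):
        """Toy stand-in for simulation output: energy differences between
        adjacent alchemical windows. Smaller steps give smaller, easier-to-
        converge perturbations."""
        step = lam_to - lam_from
        return [random.gauss(2.0 * step, math.sqrt(step)) for _ in range(n)]

    def zwanzig(dus):
        return -kT * math.log(sum(math.exp(-du / kT) for du in dus) / len(dus))

    # Break one large alchemical change (lambda: 0 -> 1) into 10 small
    # windows and accumulate the per-window free energy differences.
    lams = [i / 10 for i in range(11)]
    total = sum(zwanzig(sample_delta_u(a, b)) for a, b in zip(lams, lams[1:]))
    print("staged dG estimate: %.3f kcal/mol" % total)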

It has been demonstrated, through large-scale retrospective and prospective testing on more than 100 protein targets and ~5,000 molecules, that our implementation of FEP provides a computational assay that is nearly equivalent to performing the corresponding experimental assay. In approximately 50% of cases, the prediction is within the typical error of experimental measurement; in 90% of cases, it is within a factor of 10 of the experimental value. This level of accuracy is high enough to have a very significant impact on driving drug discovery projects. There are now a large number of examples from our collaborators in which FEP was successfully used to advance a program that might otherwise have taken much longer to reach the same point, or failed to produce a viable drug candidate at all.
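
To relate the two accuracy statements above: binding free energy and binding constant are connected by dG = -RT ln K, so an n-fold change in affinity corresponds to a free energy shift of RT ln n. At room temperature a factor of 10 is therefore about 1.4 kcal/mol, as the quick check below confirms:

    import math

    R = 0.0019872   # gas constant, kcal/(mol*K)
    T = 298.15      # room temperature, K

    def fold_change_to_kcal(n_fold):
        """Free energy shift corresponding to an n-fold change in binding
        constant: dG = R * T * ln(n)."""
        return R * T * math.log(n_fold)

    print("10-fold change in affinity: %.2f kcal/mol" % fold_change_to_kcal(10))
    print(" 2-fold change in affinity: %.2f kcal/mol" % fold_change_to_kcal(2))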

 

Large-Scale FEP for Solving Challenging Design Problems

The impact of FEP can be dramatically increased by scaling up the number of calculations.  One situation where this strategy is essential is when the target is a very difficult one on which to make progress. A recent project in this category is an ongoing collaboration with a top-5 pharmaceutical company.  The collaborators had been unable to make any progress in improving potency, starting from an interesting lead compound derived from experimental screening, despite having synthesized 73 compounds.

We ran 3,500 compounds through the FEP computational assay.  Experimentally, it would have taken a minimum of several years, and on the order of $10M, to make and assay the same number of compounds; the cost of the FEP calculations was approximately $20,000.  Of the 3,500 compounds, only 23 were predicted to improve on the binding affinity of the lead compound, which is consistent with the difficulties encountered in the traditional experimental approach to the problem.  To date, the top three compounds selected by FEP have been synthesized and tested experimentally.  Two of the compounds have delivered more than an order-of-magnitude increase in potency, while results for the third are still being analyzed.

 

The Future: Drug Discovery Enabled by Moore’s Law

The results described above demonstrate that drug discovery projects in which the structure of the protein target is known are now enabled on the Moore’s law curve. This means that in 10 years, the cost of FEP calculations is expected to fall by approximately a factor of 30 (assuming a 2x improvement in cost/performance every two years). Alongside the hardware improvements, we can expect gains in both speed and accuracy from improvements in the software and algorithms. More properties will become amenable to a computational assay, and the existing assays will converge to uncertainties equivalent to those inherent in experimental assays. Fast approximate methods will also improve, making the preliminary screening of billions (or even trillions) of compounds more robust. All of this implies that the quality of development candidates (and hence the success rate of new drug candidates in the clinic) will increase, while project costs will at the same time decrease. The counterweight of highly challenging biology and a higher safety/efficacy bar for new drugs will still be present; but reversing Eroom’s law using accurate, robust nanoscale simulations appears likely.
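
The “factor of 30” is simple compounding: five doublings over ten years give 2^5 = 32. Applied to the $5-per-molecule figure quoted earlier in this article, a quick check:

    def projected_cost(cost_now, years, halving_time=2.0):
        """Cost after `years`, assuming cost/performance improves 2x every
        halving_time years (the Moore's-law assumption used in the text)."""
        return cost_now / 2 ** (years / halving_time)

    print("improvement factor over 10 years: %dx" % 2 ** (10 / 2))
    print("per-molecule FEP cost in 10 years: $%.2f" % projected_cost(5.0, 10))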

Not every project is currently amenable to this technology, owing to the lack of an available protein structure, but an increasing number are, as more protein structures are determined and experimental techniques improve. Exciting new methods based on cryogenic electron microscopy (cryo-EM) are facilitating the determination of larger, more complex structures on an increasingly routine basis. This means that the number of projects that can benefit from atomistic simulation will grow every year.

The discussion above has focused on small-molecule drug discovery efforts, but the same techniques can be applied to optimization of more complex pharmaceutical agents, including macrocycles, antibodies, and other protein therapeutics. Encouraging preliminary data along these lines has already been produced.

More generally, personalized, or precision, medicine requires the design of molecules customized to the patient’s genetic profile; this in turn implies development of a much larger number of molecules, with fewer patients served by each molecule. A combination of high precision drugs, along with drastically lower development costs, will be needed to make this vision a reality, as a complement to “big data” genomics analysis. Nanoscale biomolecular simulation can deliver on both of these requirements, opening up large new market opportunities.

Schrödinger’s goal is to create an encompassing, systematic approach that exploits the revolutionary implications of digitally enabled drug discovery. Such an approach involves deep integration of experimental methodology with computational assays, a wide range of auxiliary modeling technologies, and a collaborative enterprise platform for managing experimental and computational data, as well as enabling rapid digital prototyping and testing of new designs prior to expensive experimental synthesis. The success of the Nimbus ACC project, followed by the development of a widely effective FEP computational assay, represents the leading edge of a strategic inflection point, one that Schrödinger is well poised to capitalize on going forward.

 

The Future: The World of Atoms Enabled by Moore’s Law

Nanoscale simulation can be applied to many materials and process design problems other than drug discovery. At present, these applications are somewhat behind drug discovery; however, success in drug discovery will accelerate the adoption of the technology in fields as diverse as electronic materials, polymers, catalytic chemistry, and energy conversion.

In a recent book, economist Robert Gordon makes the case that GDP growth over the past 50 years has slowed dramatically compared to the previous 100 years, owing to a lack of productivity growth in major segments of the economy.  While we are all dazzled by the changes in information technology, the world of bits is quantitatively limited in size compared to the world of atoms.  In the end, the limiting factor in electric cars revolutionizing transportation is the capability of the battery, not self-driving software. And battery improvement has proceeded slowly over the years, despite massive capital investment.

Is there a combination of materials that would yield a battery that is 2x, 3x, or even 10x more capable and cost-effective than the lithium-ion units currently in use? No one knows for sure, but the fraction of candidate materials that has been screened experimentally is minuscule compared to what is theoretically possible. A computational assay for battery performance is not feasible at present; but things can change very rapidly when dealing with exponential growth. When the world of atoms as a whole is enabled by Moore’s law, a new industrial revolution may once again reset the human ability to transform nature for the better.

 

Schrödinger as an Investment Opportunity

The strategic inflection point in nanoscale simulation presents investors with an opportunity to ride a wave of rapid innovation and exceptional value creation. Computationally driven projects can put superior molecules into the clinic at a fraction of the cost of an equivalent experimental effort. For this reason, Schrödinger is experiencing rapidly growing interest in joint ventures at increasingly attractive deal terms. Additional capital will enable Schrödinger to scale up its portfolio of risk-sharing projects, in both the number of projects and the magnitude of the equity stakes.  Risk sharing requires up-front capital investment, but delivers highly leveraged rewards for success in the clinic. Projections of current opportunities indicate that a revenue stream in excess of $200M/year from drug discovery partnerships could be reached within 4 years, an amount that could be significantly increased with access to additional investment capital.

Software sales benefit from the success of risk-sharing ventures, which illustrate the transformative nature of the strategic inflection point and shift the market-power equation in large customer relationships.  We expect software sales to grow at a minimum of 10-15% per year, and quite possibly at a significantly higher rate as FEP+ deployment is scaled up and our enterprise platform gains traction. New investment capital will enable the relatively modest increase in sales, support, and infrastructure required to grow the software business. The steadily increasing renewable revenue stream from this business is complementary to the more rapidly growing, but less predictable, risk-sharing revenue stream.

Finally, continued investment in cutting-edge science and technology development, scientific software engineering, and enterprise platform functionality is essential for Schrödinger to retain a leadership position in nanoscale simulation and modeling. The major fraction of returns during a strategic inflection point is generally captured by the market leader.  Our research pipeline indicates that breakthroughs similar to FEP are possible in the next 5 years across a wide range of compelling applications, including protein structure prediction, development of computational assays for antibodies and other biologics, application of deep learning to drug discovery, and design of new materials.  Schrödinger’s demonstrated ability to translate innovative basic research advances into robust, high-performance software will ensure the continued success of drug discovery partnerships, the loyalty and increased subscriptions of software customers, and penetration into new markets.

Richard Friesner, Ph.D.