John Irwin at UCSF, I, and others working in drug discovery have had several discussions and email exchanges on the performance and comparison of different virtual screening and docking methods across different targets and problems.
The eCheminfo network supports community of practice activities: it is intended to support a group of people who come together to share knowledge on good, better, and best practices, to learn skills from each other, to share experiences, and to engage in collective learning. It could therefore serve as a neutral environment for coordinating work on practice activities such as the difficult area of comparative studies in screening and docking.
We thought it would be useful to summarise some ideas on supporting greater collaboration on such work, and we invite comments and discussion on them below.
Please contribute to the discussion and add your comments on the Cheminfostream Blog or John’s Docking.org blog. (We can also be reached via email at barry.hardy [at] douglasconnect.com and jji [at] cgl.ucsf.edu.) We look forward to your input.
Barry Hardy
Could We Take a Community Approach to Comparing Virtual Screening Methods?
After twenty years of undeniable progress, molecular docking seems to have plateaued. A recent paper by Tirado-Rives and Jorgensen [1] dashes some of the few hopes we had left by showing that conformational energetics alone make it impossible to rank-order diverse compounds in high-throughput virtual screening. In a Perspective in the same issue [2], Leach, Shoichet and Peishoff summarize the stagnating state of the art that is docking, and suggest a pragmatic way forward through measurement and benchmarking. Again in the same issue, a laborious evaluation of ten docking programs and 37 scoring functions was applied to seven protein types for three tasks: binding mode prediction, virtual screening for lead identification, and rank-ordering by affinity for lead optimization [3]. Among some encouraging results and upbeat analysis, the paper makes a number of worrying observations, including that "high fidelity in the reproduction of observed binding poses did not automatically impart success in virtual screening". Moreover, for eight diverse systems, "no statistically significant relationship existed between docking scores and ligand affinity."
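To make that last observation concrete, here is a minimal sketch (in Python with SciPy; the scores and affinities are invented illustrative values, not data from [3]) of the kind of test behind it: a rank correlation between docking scores and measured affinities, with a p-value against the null hypothesis of no relationship.

```python
# Sketch: testing whether docking scores rank-order ligands by affinity.
# Illustrative, made-up numbers only; not data from Warren et al. [3].
from scipy.stats import spearmanr

# Hypothetical docking scores (more negative = predicted tighter binding)
docking_scores = [-9.2, -8.7, -8.1, -7.9, -7.5, -7.0, -6.8, -6.1]
# Hypothetical measured affinities as pKd (higher = tighter binding)
measured_pkd = [6.1, 7.9, 5.5, 8.2, 6.0, 7.1, 5.2, 6.6]

# Negate the scores so that "better" points the same way for both lists,
# then ask whether the rank orders agree more than chance would allow.
rho, p_value = spearmanr([-s for s in docking_scores], measured_pkd)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
# A p-value well above 0.05 is the "no statistically significant
# relationship" outcome reported for many of the systems in [3].
```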
The physics of protein-ligand binding is clearly both important and challenging. The NIH-sponsored workshop described by Leach, Shoichet and Peishoff [4] called for more high quality data to be made available for benchmarking, and for "well developed testing sets to be evaluated with all available technology, without barriers, if we are to see forward rather than lateral growth in the field."
Most efforts at "apples-to-apples" comparisons of docking methods have been plagued by one methodological weakness or another. For example, a common criticism is that the "experts" running the programs are more familiar with one program than another. Criticisms of unfair bias due to past or ongoing association with a particular software group are frequent, particularly from the developers whose software performed worst. Numerous criticisms are also levelled at how success is judged and how the test sets are compiled; in fact, nearly everything about docking comparison studies can be criticised.
In the spirit of collaboration, and in an effort to move the field forward as advocated by NIGMS, we are suggesting here an "open source" initiative to compare docking methodologies (our use of "open source" here refers to the methods used to carry out the comparisons, not to whether the source code involved is open). We propose that a form of peer review, hosted on a wiki and supplemented by workshop activity or virtual conference-based discussion, be applied at all stages of a fair "competition", including the design of the experiment, the collection of the data, the running of the programs, and the analysis of the results. The goal is not to show up one program or another as a winner or loser, but to honestly and fairly compare methods, allowing all reasonable criticisms to be raised during the process, so that the entire field can move forward.
The UCSF group are now offering a dataset they recently compiled from the literature: a benchmark of known actives and challenging decoys for 40 diverse targets [5]. They are also actively soliciting experimental test data from pharma. They know it is a challenge to get such data released, even for projects that are no longer active, but they are asking nonetheless, for the benefit of the field.
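For readers who want a concrete sense of how such an actives-and-decoys set gets used, here is a small, self-contained sketch (plain Python; the labels and metric choices are our illustration, not a prescription from [5]) of two common virtual screening success measures computed from a ranked hit list: the enrichment factor and the ROC AUC.

```python
# Sketch: scoring a virtual screen against a benchmark of known actives
# seeded among challenging decoys. All labels below are made up.
def enrichment_factor(labels_ranked, fraction=0.01):
    """EF at a given fraction of the ranked database:
    (actives found in the top fraction) / (actives expected at random)."""
    n = len(labels_ranked)
    n_top = max(1, int(n * fraction))
    actives_total = sum(labels_ranked)
    actives_top = sum(labels_ranked[:n_top])
    return (actives_top / n_top) / (actives_total / n)

def roc_auc(labels_ranked):
    """AUC via the rank-sum (Mann-Whitney) identity over a ranked list."""
    pos = sum(labels_ranked)
    neg = len(labels_ranked) - pos
    # Sum of 1-based positions held by actives in the best-first ranking.
    rank_sum = sum(i + 1 for i, y in enumerate(labels_ranked) if y)
    # Higher AUC = actives concentrated near the top of the ranking.
    return (pos * neg + pos * (pos + 1) / 2 - rank_sum) / (pos * neg)

# Hypothetical screen: 1 = active, 0 = decoy, sorted best-score-first.
ranked = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(f"EF@10% = {enrichment_factor(ranked, 0.10):.1f}")
print(f"ROC AUC = {roc_auc(ranked):.2f}")
```

Even with agreed metrics like these, choices such as the decoy-to-active ratio and the reporting fraction for EF remain exactly the kind of design decision that community peer review of a benchmark would need to settle.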
In the upcoming eCheminfo Community of Practice meeting in Bryn Mawr we have scheduled a forum (16.00, Tuesday 17th October) to discuss whether such an "open source" project to benchmark docking programs is of interest and, if so, how best to move forward. We think this is an auspicious time for such a project, and we hope you (and your company or organization) do too. The world has benefited enormously from other "open source" projects, such as Linux, MySQL, and Wikipedia. We think this is docking's time. What do you think?
As certainly not everyone interested in this topic can be present at the meeting in Bryn Mawr, and time there is limited, it would be good to exchange ideas virtually in the run-up to the meeting and beyond.
Barry Hardy (eCheminfo Community of Practice) and John Irwin (UCSF, Docking.org)
References
[1] Tirado-Rives & Jorgensen, Contribution of Conformer Focusing to the Uncertainty in Predicting Free Energies for Protein-Ligand Binding, J. Med. Chem., 2006, 49, 5880-5884.
[2] Leach, Shoichet & Peishoff, Prediction of Protein-Ligand Interactions. Docking and Scoring: Successes and Gaps, J. Med. Chem., 2006, 49, 5851-5855.
[3] Warren et al., A Critical Assessment of Docking Programs and Scoring Functions, J. Med. Chem., 2006, 49, 5912-5931.
[4] http://www.nigms.nih.gov/News/Reports/DockingMeeting022406.htm
[5] Huang, Shoichet & Irwin, Benchmarking Sets for Molecular Docking, J. Med. Chem., 2006, in press.