A toxicology ontology roadmap

In this article we lay out the main justifications and needs for ontology development to enable the building of interoperable predictive toxicology systems and services for industry to design and test safer products.

http://www.altex.ch/en/index.html?id=16&aid=2
We managed to get quite a bit of software installed at the #OpenTox workshop at SETAC Africa on Tuesday, on about 25 laptops belonging to the group. But we still have a couple of final machines to set up.
Since we can now work on some problems, we decided at day's end to continue the workshop today, although that was not originally planned. But why not? Perhaps we can run all week, until we all get tired out :)!
Our main challenge and delay with the Virtual Appliance installation was a final patch that Roman and Nina discovered was needed over the weekend. It was required to fix a bug where ontology service files were removed from the temp directory when the virtual machine restarted. At least we can fix that bug in a nicer way next time.
The final installation step, however, required going onto the net for every machine in turn, which took most of yesterday afternoon! We had no internet for several hours, until we finally sent a group member, Anyi, down to a provider office with cash to top up an account she had, which we were then able to use in the afternoon (thanks Anyi for saving the day!). The connection was neither reliable nor fast, and only one machine could be online at a time, but we eventually got through.
It would be helpful if we could figure out the shared file folder facility between the Linux Virtual Appliance and Windows, but we have not had time to do that yet.
We also installed and ran Bioclipse live over the one internet connection at the end of the afternoon, and had it go out and fetch OpenTox resource and model information live as an example. We did it (just about)! I pointed out the new login profile setup Egon has recently put in place, and perhaps we can get that done for the group today.
Philip Judson also joined us and we were able to get Derek and Meteor set up on machines too. Based on his experience, he gave a nice overview of some factors we have to reflect on when using such in silico programs and methods!
Eventually the bus came and took us home - tired but at least installed :)! We will try to continue today!!
While in Zululand in December working on conservation monitoring in the bush, I learned about an interesting tree called the Tamboti Tree. Its leaves were traditionally used to cure toothache, but it seems that compounds in this tree can also cause severe nerve damage. For example, we were told that it was quite dangerous to use its wood in a fire, as inhaling the fumes could cause brain damage, if not death. Without such local knowledge, how would I have known that? I could easily have assumed this wood was like any other and burnt it!
And so this planted the seed for what I will call the Tamboti Tree Use Case. In this case an individual moving about in the environment is given advice on such risks. The advice could come from the future distributed semantic knowledge base in predictive toxicology that we have been working to develop on OpenTox: the individual's mobile device would recognise the biological object, query the knowledge base, and return the risk warning. The same pattern applies to many other situations in the bush (e.g., if you encounter a lion, "don't run") and indeed to many contexts in society (e.g., "don't buy this product, you are allergic to it" for shoppers in the supermarket).
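As a toy illustration of the flow in this use case (recognise object, query knowledge base, return warning), here is a minimal sketch. The in-memory dictionary stands in for the envisaged distributed semantic knowledge base, and its entries are just the examples mentioned above; everything else is a hypothetical placeholder.

```python
# Toy stand-in for the distributed risk knowledge base.
# Entries are the illustrative examples from the text, not real service data.
RISK_KB = {
    "tamboti tree": "Do not burn this wood: inhaling the fumes can cause "
                    "brain damage, if not death.",
    "lion": "Don't run.",
}

def risk_warning(recognised_object: str) -> str:
    """Return the risk warning for an object recognised by the device."""
    return RISK_KB.get(recognised_object.lower(),
                       "No risk information found for this object.")

print(risk_warning("Tamboti Tree"))
```

In the real use case the lookup would of course be a query against remote semantic services rather than a local dictionary, but the request/response shape is the same.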
So this Tamboti Tree Use Case can serve as an inspiration for us to work towards in our efforts to create this future semantic web of predictive toxicology knowledge and services.
What can we do now? Once back at my computer I searched on Google, found information on the tree on Wikipedia, and even some sentences on its toxicity. However, the active ingredient mentioned, the diterpene excoecarin, had no chemical information or structure linked.
I went back to Google, searched around, and finally found the chemical structure as a SMILES string. I pasted this into the new Bioclipse application, saw the structure, and clicked play. Bioclipse then started running local predictive models on the toxicity of the molecule, which I could start examining. But it also went out on the web and started bringing in predictions and alerts from the distributed set of available OpenTox services. I could also edit the structure, click play again, and the models and predictions were recalculated.
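For readers curious what talking to an OpenTox-style service looks like under the hood, here is a minimal sketch of building the REST requests involved. The service root, URL paths and parameter name below are illustrative assumptions in the OpenTox flavour, not the exact API of any deployed service.

```python
from urllib.parse import quote, urlencode

# Hypothetical service root; real deployments publish their own URIs.
SERVICE_ROOT = "https://opentox.example.org"

def compound_lookup_url(smiles: str) -> str:
    """Build a GET URL resolving a SMILES string to a compound resource.

    The /query/compound/.../smiles path is an assumed convention
    used here for illustration only.
    """
    return f"{SERVICE_ROOT}/query/compound/{quote(smiles, safe='')}/smiles"

def prediction_request(model_uri: str, dataset_uri: str):
    """Build the POST target and form body asking a model to predict a dataset."""
    return model_uri, urlencode({"dataset_uri": dataset_uri})

url = compound_lookup_url("CC(=O)Oc1ccccc1C(=O)O")  # aspirin as a stand-in SMILES
target, body = prediction_request(f"{SERVICE_ROOT}/model/42",
                                  f"{SERVICE_ROOT}/dataset/7")
```

An application like Bioclipse would then POST `body` to `target` and typically poll a returned task URI until the prediction results are ready.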
You can see a Bioclipse-OpenTox movie of my activities on this "early version of the Tamboti Tree Use Case" at:
We still have lots of work to do, but you can see the directions for progress.
This promising interoperation between Bioclipse and OpenTox was achieved in Autumn 2010 by Ola Spjuth (blogging at http://bioclipse.blogspot.com/), Egon Willighagen, and OpenTox developers, and first demoed at the OpenTox-EBI industry forum workshop on ontology and interoperability at Hinxton (16-17 November). I think it is a good early practical example of the value of ontology and interoperability, and of the applications they enable linked with the nascent semantic toxicology web; it has much promise for further development in the months and years ahead.
You can download the Bioclipse application that interoperates with OpenTox and try it out yourself using one of the following downloads for PC, Mac or Linux:
Last year we formed the “Scientists Against Malaria” (SAM) collaboration to apply modern drug design and modelling techniques in combination with industry standard infrastructure and interdisciplinary science to help develop new treatments against Malaria. The group’s first project assembles a number of leading academic researchers together with smaller innovative companies who are collaborating to develop novel inhibitors active against the Plasmodium parasite.
This is our first step in creating a collaboration learning machine for our community to enable and accelerate knowledge flow to progress the scientific research needed to develop new treatments against neglected diseases, which include other parasitic and tropical diseases, and diseases such as ALS which devastate people's health and are currently without any available treatment solutions.
Our drug design projects involve situations in which a number of partners collaborate to jointly solve molecular design problems as an early-stage step in drug discovery. The partners may include commercial organisations, academic labs, and individual consultants who form a Virtual Organisation (VO) to run the project, work that historically has typically been carried out within a single pharmaceutical organisation. The knowledge and experience of the partners involved is a critical resource and success factor for the project, as is the ability to collaborate effectively. Additional resources include computer software and machinery for molecular design, modelling and virtual screening; experimental lab facilities for running assays and experiments on predicted hits for the problem studied; and supporting Information and Communications Technology (ICT) infrastructure. A significant amount of analysis, interpretation of results, synthesis and discussion is involved in many steps of the research process.
Computer-based models of protein targets and of protein-ligand and protein-protein interactions are built from existing knowledge: crystal structures, physical chemistry, and applications of bioinformatics and cheminformatics methods. A variety of methods including virtual screening, docking, pharmacophore-based design and free energy simulation are applied to the design of drug candidate molecules and the estimation of their affinity for the target, based on interactions such as specific hydrogen bonding and hydrophobic contacts with the active site of an enzyme. Holistic approaches to design also take into account specificity, cross-target interactions, Lipinski's rule of 5 on drug-likeness, and the ADME and toxicity properties of candidate molecules. Predictions are tested in the laboratory using a variety of experimental screening methods: High Throughput Screening (HTS) can examine the activities of libraries of molecules against a target, whereas high content assays may probe a specific toxicity mechanism and property of a molecule.
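Lipinski's rule of 5 mentioned above is simple enough to sketch directly. The following is a minimal, self-contained check; in practice the descriptor values would come from a cheminformatics toolkit rather than being typed in by hand.

```python
def lipinski_violations(mol_weight: float, logp: float,
                        h_donors: int, h_acceptors: int) -> int:
    """Count rule-of-5 violations: MW > 500 Da, logP > 5,
    H-bond donors > 5, H-bond acceptors > 10."""
    return sum([mol_weight > 500,
                logp > 5,
                h_donors > 5,
                h_acceptors > 10])

def looks_druglike(mol_weight, logp, h_donors, h_acceptors) -> bool:
    """One violation is conventionally tolerated."""
    return lipinski_violations(mol_weight, logp, h_donors, h_acceptors) <= 1

# Aspirin (approximate descriptors): MW 180.2, logP ~1.2, 1 donor, 4 acceptors
print(looks_druglike(180.2, 1.2, 1, 4))  # True
```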
A Lessons Learned process is run at the end of every significant process in the collaborative research workflow and prioritised lessons are documented into the VO knowledge base. Best Practices are agreed and documented at the start of the project. If best or better practices are discovered during the Lessons Learned process (e.g., on discussing “what went well”), they are documented into the VO knowledge base for future reference.
A complex event-driven engine is used to track all significant events occurring during the collaborative work and to provide recommendations in the form of traffic-light statuses (e.g., green: positive, red: negative, yellow: uncertain), where yellow situations may trigger discussion and further actions. The combination of people and infrastructure may evolve and improve as activity expands, thus becoming a Collaboration Learning Machine for Drug Discovery and Neglected Diseases Research.
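As a sketch of how such a traffic-light recommendation step might look, here is a minimal rule-based classifier. The event fields, rules and thresholds are invented for illustration and are not the actual engine used in the project.

```python
# First matching rule wins; unknown events default to yellow (uncertain),
# which is exactly the case that should trigger a discussion.
RULES = [
    (lambda e: e.get("kind") == "milestone" and e.get("on_time", False), "green"),
    (lambda e: e.get("kind") == "milestone" and not e.get("on_time", False), "red"),
    (lambda e: e.get("kind") == "assay" and e.get("confidence", 0.0) >= 0.8, "green"),
]

def classify(event: dict) -> str:
    """Map a project event to a traffic-light colour."""
    for predicate, colour in RULES:
        if predicate(event):
            return colour
    return "yellow"

def recommend(event: dict) -> str:
    """Attach a follow-up recommendation to the traffic-light status."""
    colour = classify(event)
    actions = {"green": "no action needed",
               "red": "escalate to project leads",
               "yellow": "schedule a discussion and agree follow-up actions"}
    return f"{colour}: {actions[colour]}"
```

A production engine would of course subscribe to an event stream and persist its recommendations, but the classify-then-recommend shape stays the same.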
I will discuss the activities of the Scientists Against Malaria (SAM) consortium at the BIO-IT conference in Boston, taking place 12-14 April 2011, in its collaborative drug discovery session (http://www.bio-itworldexpo.com/Bio-It_Expo_Content.aspx?id=101305). SAM was formed in 2010 from the InnovationWell Neglected Diseases Collaboration Pool as a virtual drug discovery organization to collaborate on the design of kinase inhibitors against the Plasmodium malaria parasite. Work activities have included target selection and modelling, protein expression and assay development, computational drug design, and screening. Supported by developments on the EU FP7 funded SYNERGY and OpenTox projects, a combination of interoperable information systems, ontologies and web services was designed and deployed to manage the data, documents, computational and assay results, activity and toxicology predictions, as well as dashboards to track project progress and to support decision making. We will discuss our results, experiences and lessons learned to date, and future directions and opportunities for collaborative drug design based on our virtual organization approach.
On Sunday May 30 we host an OpenTox Workshop near Berlin in Potsdam that will bring many leading international research program directors and leaders together to discuss how collaboration and the increased linking of resources over the World Wide Web could progress human safety research and safety assessment. By linking resources and data, increasingly powerful computer-based models can be built for predicting and avoiding unwanted adverse toxic side effects of drugs, chemicals, ingredients in soaps and cosmetics, pesticides, etc., thus enhancing human safety and better protecting the environment. Such methods should also eventually lead to the replacement of many animal experiments.
During the workshop we will apply a variety of design, informatics and modelling methods to predictive toxicology problems guided by workshop leaders with expertise in the approaches used. A case study approach will additionally be followed so that groups can work together throughout the week on their case study problems. The case studies will also be developed virtually before the workshop week with support extended afterwards for further work including experimental testing of interesting results and hypotheses developed. The virtual aspects of the case study work will additionally be supported by the Synergy and OpenTox infrastructures and related Collaboration Pool and pilot collaboration study.
Case studies will focus on the development of innovative integrated testing strategies applied to the problem of predicting the toxicity of a molecule. Such strategies are becoming an increasingly important part of drug design, aiming to remove toxic liabilities as early as possible in the design process. REACH legislation will also require organisations in coming years to carry out more extensive safety testing of all chemical ingredients in a variety of products ranging from consumer products to food to agrochemicals. Related to this is the relatively unsatisfactory use of animal experiments to predict human toxicity: they are not only complex and expensive, but also often do not predict human effects well, if at all. Hence new approaches combining computational modelling, in vitro assays, systems biology, stem cell technology etc. are required.
We will apply techniques to the study of existing knowledge (e.g., from adverse events, biological literature, pathway models etc.) to help support mechanism-based hypotheses and strategies. Modelling techniques based on data mining, database searching, and read-across will be applied to chemical categories. Integrated QSAR-based models supported by the new OpenTox infrastructure will be used to build properly validated models, including estimation of the applicability domain. We will also research the ADME and kinetics properties of structures as relevant to their toxicity profiles. We will attempt to predict primary metabolites based on P450 metabolism simulation and model the potential toxicities of those metabolites. Population-varied physiologically-based ADME simulations will be carried out for in vitro-in vivo extrapolation, exposure estimation, and to study variation across individuals and populations. We will also apply workflow techniques to the combination of methods, and Bayesian networks to the evolution of weight-of-evidence based consensus predictions.
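The weight-of-evidence consensus idea can be sketched as a naive log-odds combination of individual model outputs. The weights and probabilities below are illustrative, and a real Bayesian network would model dependencies between the evidence sources rather than assume independence as this sketch does.

```python
import math

def consensus_probability(predictions):
    """Combine (probability, weight) pairs from independent models into a
    single consensus probability by summing weighted log-odds."""
    log_odds = 0.0
    for p, weight in predictions:
        p = min(max(p, 1e-6), 1 - 1e-6)  # clamp to keep the log finite
        log_odds += weight * math.log(p / (1 - p))
    return 1.0 / (1.0 + math.exp(-log_odds))

# Two models fairly sure the compound is toxic, one weakly disagreeing
# (hypothetical outputs and confidence weights):
p = consensus_probability([(0.9, 1.0), (0.8, 1.0), (0.3, 0.5)])
```

Weighting each model by confidence (or validated performance) is the simplest way to express "weight of evidence"; disagreeing models pull the log-odds sum back toward 0.5.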
The most promising strategies and predictions developed will be used to design experimental human toxicity-oriented in vitro assays which will be run after the workshop as part of the virtual case study extension work. Both computational and experimental work will be documented according to industry best practices in a collaborative electronic laboratory notebook. We will attempt to develop new combined in silico - in vitro strategies superior to existing approaches which should help advance the field and industry testing and regulatory needs.
Through the time spent working and discussing together, combined with the availability of a variety of leading software and expert support from workshop leaders, participants should take home ideas and learning to help accelerate their own projects related to safety design and risk assessment. The location and atmosphere in Oxford are also an ideal background for networking, getting to know your peers and joining the ongoing eCheminfo community of practice activities. As is common with eCheminfo gatherings, the workshop usually attracts a variety of backgrounds, including industry, academia and government research institutes, and from many different countries. We also welcome non-modelling specialists from different areas of chemistry, biology and toxicology, who bring interdisciplinary interaction to the collaborative group work.