The detailed program for the eCheminfo Predictive Toxicology & ADME Workshop which will take place in Oxford 2 - 6 August 2010 is provided below.
eCheminfo Oxford 2010 Workshop
Predictive ADME & Toxicology Workshop
Medical Sciences Teaching Centre, Oxford University, Oxford, UK
2 - 6 August 2010
Download a copy of the program as a pdf:
Detailed Workshop Program
Monday August 2
08.30 Registration Open
09.00 Overview of Workshop Activities and Case Studies, Presented by Barry Hardy (Douglas Connect)
We will review the approach to the workshop including group work and case studies.
09.30 Human Adverse Event Data Mining, Led by Jeff Wiseman (Pharmatrope)
The FDA has made a concerted effort over the last decade to systematize the recording of adverse event data and enhance its utility for data mining and drug design. Pharmatrope’s new Titanium™ database condenses and filters the FDA data to report the 100,000 drug-adverse event associations that have been identified as statistically significant from a total of 8,000,000 drug-adverse event relationships reported for the 1,800 marketed drugs. The database has been applied to construct fragment-based QSAR models for 650 adverse events across the full spectrum of toxicity classes.
These models are tuned for maximum utility in drug discovery by minimizing the prediction of false positives. This is accomplished by removal of random noise from the underlying data. This tuning is also applied to identify significant adverse event-adverse event links based on their common interactions with drugs. This data-driven clustering of event-event relations maximizes the signal that relates chemical substructure to adverse events and allows us to begin distinguishing adverse events that are linked to off-target activities as opposed to the primary pharmacological activity of the drugs.
For the Oxford workshop we will describe the rationale underlying the construction of the Titanium system and provide examples of its application. We will then provide a walk-through of representative applications to structure-based prediction of toxicity and to investigation of underlying mechanisms of toxicity. The walk-through will include demonstrations of the integration of Titanium with commonly available data mining tools. With this background Titanium will be made generally available for application to the predictive toxicology case study work of the workshop.
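Pharmatrope's exact significance filter is proprietary, but disproportionality statistics of the kind routinely applied to spontaneous-reporting data give a feel for how drug-adverse event associations can be screened. A minimal Python sketch, with invented counts (the function and numbers are purely illustrative, not the Titanium method):

```python
# Hypothetical 2x2 contingency table for one drug-adverse event pair,
# counted over a spontaneous-reporting database:
#                     event    all other events
# drug of interest      a            b
# all other drugs       c            d
def proportional_reporting_ratio(a, b, c, d):
    """Proportional Reporting Ratio: how much more often the event is
    reported for this drug than for all other drugs."""
    rate_drug = a / (a + b)
    rate_others = c / (c + d)
    return rate_drug / rate_others

# Toy counts, invented for illustration only
prr = proportional_reporting_ratio(a=30, b=970, c=300, d=99700)
# A common screening rule flags pairs with PRR >= 2 and at least 3 reports.
print(round(prr, 2))  # 10.0
```

A PRR of 1 means the event is reported no more often for this drug than for the background; values well above 1 (here 10) are the kind of association a significance filter would retain.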
13.00 Building and Testing Mechanism-Based Models of Chemical Toxicity using Data from the U.S. EPA ToxCast Program, Led by Richard Judson (U.S. EPA, National Center for Computational Toxicology)
The U.S. EPA ToxCast program is using in vitro assay data and chemical descriptors to build predictive models for in vivo toxicity endpoints. In vitro assays measure the activity of chemicals against molecular targets such as enzymes and receptors (measured in cell-free and cell-based systems in binding, agonist and antagonist modes), in addition to cellular phenotypes and pharmacokinetic-related parameters. Over 600 separate assays were run in concentration-response format to derive AC50 or LEC values for each chemical-assay combination. This collection of data is being used to build predictors of in vivo toxicity endpoints derived from chronic/carcinogenicity studies in rats and mice, prenatal developmental toxicity studies in rats and rabbits, and two-generation reproduction studies in rats. The models use raw assay data as inputs, as well as derived parameters such as pathway and disease perturbation scores that summarize chemical activity across either published pathways or disease-gene collections. Also used are combinations of potency data (the concentrations at which chemical activity turns on) and efficacy (the magnitude of effect), plus the variance around mean estimates. This session will show how to interpret the ToxCast data and how to use the data to build signatures of endpoints including liver tumors, cleft palate and reproductive fitness. Users will run simple models and learn to judge the value of the models in terms of specificity, sensitivity and other statistical metrics, and how to balance statistical and biological merits. These models are put into the context of prioritization for further testing, and initial prioritization plans will be discussed. The example analyses will be carried out using a set of R functions, but no R programming experience is necessary.
This work may not necessarily reflect official Agency policy.
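As background for judging signatures by the metrics discussed above, sensitivity and specificity can be computed directly from a confusion matrix. A minimal sketch in Python (the session itself uses R functions; the toy labels below are invented):

```python
def confusion_stats(y_true, y_pred):
    """Sensitivity, specificity and balanced accuracy for a binary
    toxicity classifier (1 = endpoint observed in vivo, 0 = not)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)   # fraction of true positives flagged
    specificity = tn / (tn + fp)   # fraction of true negatives passed
    return sensitivity, specificity, (sensitivity + specificity) / 2

# Toy data: 1 = endpoint observed in vivo, prediction from a signature
observed  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
sens, spec, bal = confusion_stats(observed, predicted)
```

Balancing the two metrics (rather than maximizing either alone) is the statistical side of the statistical-versus-biological trade-off the session addresses.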
16.00 Group Work and Discussion on Workshop Case Study Problems
18.00 Poster Session with Refreshments and Food
Tuesday August 3
08.45 Using Database Look-ups, Read-Across, and Predictive Models to Assess the Toxicity of Compounds, Led by Glenn Myatt (Leadscope)
This workshop will review methods for assessing the potential toxicity of compounds using a combination of in silico methods. Sources of toxicology information will be described and methods for integrating such data will be discussed. This will include the use of the public ToxML data exchange format for integrating toxicology databases from a variety of sources. Common approaches for searching these databases include database look-ups and searching for chemical analogs. We will use the Leadscope software for hands-on experience of searching toxicology databases. Looking up information by chemical name or ID, along with the different structure-searching methods (exact, family, substructure, and similarity) combined with simultaneous searching over toxicity fields, will be presented. Analysis of the chemical and toxicity data will be demonstrated with exercises, along with approaches to viewing the information for read-across purposes. Where there is no information on a specific chemical or any close analogs, a number of approaches to predicting its toxicity will be reviewed, including alert-based methods and QSAR models. These approaches will be described, along with methods and issues to consider when preparing data for data-mining alerts or building prediction models. The Leadscope software will be used for preparing datasets for analysis, identifying structural alerts, and building QSAR models. These models will be built with a variety of QSAR properties and substructural features. Partial logistic regression methods will be used to model categorical data, and partial least squares regression will be used to build continuous data models. Methods for selecting features and refining model performance will be described. Issues concerning how to assess the applicability domain of the models, as well as model transparency, will be reviewed.
The Leadscope software will be used to create predictive toxicity models, to explore issues concerning building and applying the models, and will be used by the class work groups for application on their workshop case studies.
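For the analog-searching step, similarity searching is typically based on a Tanimoto coefficient computed over structural features. A minimal sketch in Python, with invented feature sets (this illustrates the general technique, not Leadscope's internal implementation):

```python
def tanimoto(features_a, features_b):
    """Tanimoto (Jaccard) similarity between two sets of structural
    features: shared features divided by the union of all features."""
    a, b = set(features_a), set(features_b)
    return len(a & b) / len(a | b)

# Hypothetical fragment keys for a query compound and a database analog
query    = {"benzene", "carboxylic_acid", "ester", "phenol_ether"}
database = {"benzene", "carboxylic_acid", "phenol"}
similarity = tanimoto(query, database)   # 2 shared / 5 total = 0.4
```

In practice the feature sets are fingerprint bits, and a cut-off (often around 0.7-0.8) decides which analogs are close enough to support read-across.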
11.00 Group Work on Workshop Case Study Problems
13.30 Integrated QSAR-based Model Building, Applicability Domain Estimation, Validation and Reporting, Led by Nina Jeliazkova (Ideaconsult)
(Q)SAR modelling, or the pursuit of correlations between the structures of chemical compounds and their biological activities, has a long history. A multitude of commercial and freely available software packages exists for performing related tasks. The three most important aspects of model building are i) the availability of activity data (toxicological, pharmacological, toxico- or pharmaco-kinetic), ii) methods and algorithms for calculation of physicochemical or structural descriptors from the chemical structure, and iii) statistical methods to determine and validate the relationships. The prevailing modelling practice of gathering data manually from different sources and submitting it to some preferred (Q)SAR modelling tool makes the process labour intensive and error prone. Moreover, it is difficult to integrate predictions from diverse programs. The OpenTox Framework, developed by partners of the FP7 OpenTox project (1), aims to provide unified access to toxicity data, (Q)SAR models, procedures supporting validation, and additional information that helps with the interpretation of (Q)SAR predictions. This is achieved on two levels: i) a common information model, based on ontologies, and ii) availability of data and methods via a standardized REST web services interface, where every compound, data set, descriptor calculation algorithm or statistical method has a unique web address, which is used to retrieve information or initiate calculations. REST web services provide a convenient technology for incorporating existing, diverse IT solutions into an interoperable system, as the data access and calculation procedures can be implemented by different software packages and reside at remote internet locations. The OpenTox framework allows access on several levels, either by a user-friendly interface for toxicological experts or model developers, or by an application programming interface for development, integration and validation of new (Q)SAR algorithms.
A web application, utilizing the OpenTox framework, and implementing an integrated approach towards collaborative (Q)SAR modelling will be presented and used in hands-on exercises. OpenTox applications and services will also be used in the support of the case study work carried out by the work groups during the workshop week.
(1) OpenTox - An Open Source Predictive Toxicology Framework, is funded under the EU Seventh Framework Program: HEALTH-2007-1.3-3 Promotion, development, validation, acceptance and implementation of QSARs (Quantitative Structure-Activity Relationships) for toxicology, Project Reference Number Health-F5-2008-200787 (2008-2011). More information at www.opentox.org
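To make the REST idea concrete: under the OpenTox API, a compound is addressed by a URI, and the representation wanted is requested through HTTP content negotiation. A minimal Python sketch (the service root below is illustrative, and no network call is actually made here):

```python
from urllib.request import Request

# Illustrative service root; any OpenTox-compliant host follows the same pattern
OPENTOX_BASE = "http://apps.ideaconsult.net:8080/ambit2"

def compound_uri(base, compound_id):
    """Every OpenTox resource is addressable by a URI of this general shape."""
    return f"{base}/compound/{compound_id}"

def smiles_request(uri):
    """A GET with an Accept header asks the service for a SMILES representation."""
    return Request(uri, headers={"Accept": "chemical/x-daylight-smiles"})

req = smiles_request(compound_uri(OPENTOX_BASE, 42))
print(req.full_url)  # http://apps.ideaconsult.net:8080/ambit2/compound/42
# urllib.request.urlopen(req).read() would retrieve the structure (not run here).
```

The same URI-plus-content-negotiation pattern applies to datasets, descriptor algorithms, and models, which is what lets services hosted by different partners interoperate.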
16.00 Group Work on Workshop Case Study Problems
18.00 End of Workday
18.00 Punting (weather permitting)
Wednesday August 4
08.45 Population-varied Physiologically-based ADME Simulation for in vitro-in vivo Extrapolation and Exposure Estimation, Led by Sebastian Polak and David Turner (Simcyp)
Simcyp provides a platform for the modelling and simulation of drug absorption, distribution, metabolism and excretion (ADME) in virtual populations. It offers complex mathematical models suited to the in vitro-in vivo extrapolation (IVIVE) of experimental data, enabling exposure and its variability to be estimated. This set of models utilises physico-chemical data and information describing metabolism, induction and active transport.
A key component determining the potential toxicity of an exogenous compound is exposure. Exposure is determined by a variety of factors, including the entry point(s) into the body, the dose, the rate at which, and the extent to which, the dose reaches the site(s) of toxicity, and the rate at which the compound and its metabolites are cleared. Pharmaceutical dosing may be intravenous (IV), oral (po), dermal, or via the lungs, etc., and may consist of single or multiple bolus doses or continuous administration (IV infusion, dermal). Given the appropriate physico-chemical and in vitro data, it is possible to use mechanistic IVIVE methods to predict the time-dependent exposure of body organs to administered compounds. This “bottom-up” approach does not require in vivo data and is built into the Simcyp Simulator (Jamei et al., 2009a).
It is well known that there can be inter-individual variability of exposure linked to a huge variety of factors, including sex, age, size (weight/height), metabolic enzyme phenotype/genotype and abundance, and a variety of factors related to oral absorption (gastric emptying, intestinal transit, luminal pH, etc.). This “system” information is stored within “Population” databases which are tailored according to ethnicity and disease state (Jamei et al., 2009b). Critically, wherever possible the co-variation of these “system” parameters is built into the program, enabling the simulation of realistic virtual individuals. When coupled with the requisite compound-specific information, IVIVE techniques can be applied to assess the potential inter-individual variability of exposure via virtual trials. It is then possible to identify the characteristics of individuals who may be at particular risk of over-exposure, without the difficulties (sparse data, lack of signal) often associated with “top-down” POP-PK approaches (Aarons, 1991). In addition to system-derived variability, co-administered drugs, or environmental factors such as food components, can add significant inter-individual variability in exposure due to metabolic or transporter drug-drug interactions (DDIs).
With respect to exposure to environmental compounds, dose is not usually controlled and there can be a variety of entry points into the body. However, where these factors can be quantified, it is then possible to use population-based IVIVE techniques to assess potential exposure.
Within this workshop we will apply mechanistic IVIVE methods to case study problems, simulating the factors that determine the mean and inter-individual variability of exposure, including the impact of DDIs.
Aarons L (1991) Population pharmacokinetics: theory and practice. British Journal of Clinical Pharmacology 32(6):669-670.
Jamei M et al. (2009a) The Simcyp® Population-based ADME Simulator. Expert Opin Drug Metab Toxicol 5(2):211-223.
Jamei M et al. (2009b) A framework for assessing inter-individual variability in pharmacokinetics using virtual human populations and integrating general knowledge of physical chemistry, biology, anatomy, physiology and genetics: A tale of 'bottom-up' vs 'top-down' recognition of covariates. Drug Metab Pharmacokinet 24(1):53-75.
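The "bottom-up" simulation idea can be illustrated, in drastically simplified form, by a one-compartment oral-dose model; Simcyp's mechanistic models are far richer, and the parameters below are invented for illustration only:

```python
import math

def conc_oral_1cpt(t, dose, f_abs, ka, ke, vol):
    """Plasma concentration at time t (h) after a single oral dose:
    the classic one-compartment Bateman equation with first-order
    absorption rate ka, elimination rate ke, volume of distribution
    vol, and bioavailability f_abs."""
    return (f_abs * dose * ka) / (vol * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t))

# Illustrative parameters (not for any real drug): 100 mg oral dose
curve = [conc_oral_1cpt(t, dose=100, f_abs=0.8, ka=1.5, ke=0.2, vol=40)
         for t in range(25)]
cmax = max(curve)          # peak exposure (mg/L)
tmax = curve.index(cmax)   # hour at which the peak occurs
```

A population-based simulator repeats such a calculation for many virtual individuals, sampling parameters like clearance and volume (with their co-variation) from the population databases, so that the spread of cmax and AUC across individuals can be examined.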
11.00 Group work on case study problems
13.00 Using Computational Chemistry to Predict Drug Metabolism Mediated by Cytochromes P450, Led by Patrik Rydberg (University of Copenhagen)
Almost all phase I drug metabolism is carried out by the cytochrome P450 enzyme family. These enzymes can oxidize almost any site on a molecule, creating more hydrophilic products which the body can more easily excrete. The major reactions performed by P450s are hydroxylations, dealkylations, epoxidations, and heteroatom oxidations. The major challenges in predicting the metabolism mediated by this enzyme family are that the various isoforms in the family contribute to the metabolism of different drugs, that they have flexible active sites of different shapes, and that the reaction is catalysed by an iron-containing heme group, making simple assumptions with regard to reactivity invalid in many cases.
Thus, there are multiple things that a perfect prediction method (which unfortunately does not exist yet) should include: prediction of which isoform will contribute most to the metabolism of a specific drug, prediction of how the drug will bind in the active site of that isoform, prediction of which site on the drug will be oxidized, and prediction of which metabolite will be formed.
During the last decade many methods for the prediction of this metabolism have been developed, using various computational techniques. The binding can be modeled using docking, shape matching, pharmacophore models, and even molecular dynamics. The reactivity can be computed using quantum chemical methods, semi-empirical methods, or by fragment matching using pre-computed energies. Using statistical methods, one can also relate substrate properties to metabolites.
The various programs available make different approximations to overcome the challenges described above, using one technique or a combination of several, and all have different limitations. Thus, the performance of the methods can vary considerably depending on the dataset and application.
In the workshop the participants will learn how to identify potential metabolites, and how to design compounds to overcome potential drug development problems. The metabolism of drugs which are known to undergo P450 metabolism will be discussed and predicted using SMARTCyp. The advantages and limitations of different methods, how to choose methods, and how to combine them to get the best possible results will be discussed and tested using SMARTCyp and other available programs.
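SMARTCyp's published scoring idea, a pre-computed activation energy penalised by a site-accessibility term, with lower scores indicating more likely sites of metabolism, can be sketched as follows. The site labels, energies and weight below are invented for illustration and are not SMARTCyp's actual parameterisation:

```python
def site_score(activation_energy, accessibility, weight=8.0):
    """Rank a potential site of metabolism: lower score = more likely.
    Combines a pre-computed activation energy (kJ/mol) with a 0-1
    topological accessibility term, in the spirit of SMARTCyp."""
    return activation_energy - weight * accessibility

# Hypothetical sites on a substrate: (label, energy kJ/mol, accessibility)
sites = [("N-CH3", 39.8, 1.0),
         ("aromatic C3", 66.4, 0.5),
         ("ring CH2", 48.5, 0.8)]
ranked = sorted(sites, key=lambda s: site_score(s[1], s[2]))
print(ranked[0][0])  # N-CH3: low energy and high accessibility wins
```

The point of such a combined score is exactly the trade-off the session describes: an intrinsically reactive site may still go unmetabolized if it is buried, and vice versa.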
15.00 Knowledge-based Reasoning for the Prediction of Toxicity and Metabolism, Led by Liz Hardy (Lhasa Limited)
This workshop will look at non-statistical approaches to the prediction of toxicity and metabolism. Programs from Lhasa Limited for predicting toxicity (Derek for Windows) and mammalian metabolism (Meteor) will be used for illustration but the main aim of the workshop will be to explore the scientific case for reasoning-based prediction and its implications for computer methods.
The workshop will look at how human experts use reasoning to make predictions about chemical toxicity and xenobiotic metabolism, and how some of their thinking has been incorporated into knowledge-based computer programs.
Workshop participants should gain an understanding of how human knowledge is incorporated into knowledge-based systems and how to judge and compare predictions from reasoning-based and statistical models.
17.00 Group work on case study problems
18.00 End of Workday
Thursday August 5
08.45 Data Mining and Predictive Toxicology Workflows, Led by David Leahy (Discovery Bus)
The session will demonstrate best-practice workflow approaches applied to automating QSAR modelling, with examples using the Inkspot Science workflow composition, hosting and collaboration systems and ADMET datasets. The Inkspot hosted system makes it easy for the user to compose and test QSAR modelling methods and explore modelling parameters without programming, in an environment that supports the sharing of methods and results. We will also review QSAR modelling results generated by the Discovery Bus, a highly automated combinatorial workflow system that has been applied to the generation of multiple QSAR models for more than 10,000 datasets from the ChEMBL structure-property database, in order to compare modelling method performance and to examine questions of model validation and applicability. The workflow approach supported by such data and models will be applied to workshop case study problems.
11.00 Group Work on Workshop Case Study Problems
13.30 Weight of Evidence approaches: from Consensus Models to Bayesian Integration Methods, Led by Jin Li (Unilever)
Weight of Evidence (WoE) approaches reported in the literature range widely from subjective and qualitative to quantitative. The term usually refers to reasoning about a situation under uncertainty, in which the evidence or information available for and against some hypothesis must be weighed to ascertain which side is stronger. Although there is still no consensus on the form a WoE approach should take, researchers tend to agree that a useful approach should be capable of weighing both qualitative and quantitative evidence in an integrative manner. Bayesian methods have emerged as a promising methodology in this regard.
Bayesian methods originated in a posthumous publication by Thomas Bayes in 1763. The methods start with a suitable prior distribution, combine the prior with the available data to produce a posterior distribution, and finally draw conclusions from the posterior distribution. Bayesian methods have a number of useful features: 1) they allow for the formal incorporation of subjective qualitative expert knowledge; 2) modelling conclusions can be updated sequentially as new evidence or data arrive; and 3) they act as a science-based and objective consensus model building tool.
In this workshop, we will first introduce the basic concepts of Bayesian approaches. We will then talk through a categorical Bayesian integration framework which is capable of combining a battery of tests with positive or negative results either from predictions of in silico methods or from in vitro assays. While we will be working through some examples in predicting mutagenicity and skin sensitisation of chemicals, the advantages of Bayesian approaches will be highlighted in comparison to some traditional common consensus models. We will also introduce a slightly more advanced Bayesian approach called Bayesian Networks which is applicable to the integration of varied data from diverse sources without some of the restrictions of the categorical Bayesian framework. Class members will apply the approaches to their case study work.
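The sequential updating at the heart of such a categorical framework can be sketched for a battery of binary tests; the sensitivities, specificities and prior below are invented for illustration:

```python
def bayes_update(prior, sensitivity, specificity, positive):
    """Update P(toxic) after one test result using Bayes' rule.
    sensitivity = P(test+ | toxic); specificity = P(test- | non-toxic)."""
    if positive:
        lik_tox, lik_non = sensitivity, 1 - specificity
    else:
        lik_tox, lik_non = 1 - sensitivity, specificity
    evidence = prior * lik_tox + (1 - prior) * lik_non
    return prior * lik_tox / evidence

# Hypothetical battery: (sensitivity, specificity, observed result)
battery = [(0.85, 0.70, True),   # in silico alert fired
           (0.75, 0.90, True),   # in vitro assay positive
           (0.60, 0.95, False)]  # second assay negative
p = 0.3  # assumed prior probability of mutagenicity
for sens, spec, pos in battery:
    p = bayes_update(p, sens, spec, pos)
```

Each result moves the posterior up or down in proportion to how diagnostic that test is, which is how the approach weighs positive and negative evidence from in silico and in vitro sources within one framework.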
16.00 Group Work on Workshop Case Study Problems
18.00 End of Workday
Friday August 6
09.00 In vitro Assays that Predict Mechanisms of Human Organ Toxicity, Led by Katya Tsaioun (Apredica)
De-risking of drug discovery programs is a top concern for most pharmaceutical companies. Reducing the cost of R&D while continuing to bring innovative products to the market sounds difficult, if not impossible. Are there strategies beyond staff reduction that could help achieve this goal? This workshop will discuss in vitro ADME and toxicity assays and their validation, how they are used in the drug discovery process, and how they can be used to de-risk drug candidates. We will discuss strategies for using in vitro toxicology data in parallel with in silico approaches, which together form a new paradigm for drug discovery. The key questions we will discuss are:
• Which in vitro assays have enough validation behind them?
• How can the information obtained from the assays be used in conjunction with in vitro and in vivo efficacy and other ADMET data?
• How do we define a safety window in the absence of human in vivo efficacy data?
In conclusion, we'll present new approaches and case studies describing in vitro and in silico ADME-Tox strategies that de-risk drug discovery programs. We will also discuss paths forward, including the new tissue models and mechanisms the industry is building in order to develop more predictive tools.
10.00 Design and Planning of Experimental Assays for Testing Predictions
Together the group will discuss the design and planning of assay experiments for the testing of predictive models developed from the workshop case study work, which will then be carried out after the workshop.
11.00 Group Work on Workshop Case Study Problems
12.00 Group Presentation of Workshop Case Study Results
14.00 Group Work on Workshop Case Study Problems
16.00 Group Presentation of Workshop Case Study Results
17.00 End of Workshop
More Information & Registration
More information on program at http://echeminfo.com/COMTY_oxfordadmet10
or contact echeminfo -at- douglasconnect.com with your inquiries or to register.