John Libbey Eurotext

Environnement, Risques & Santé


High-throughput methods in toxicology and health risk assessment. Volume 16, number 1, January-February 2017




Toxicology can be defined as the science that examines the biological repercussions of exposures of living organisms to exogenous substances [1]. Its main current societal goal is to develop reliable predictions of the human health impact of exposures to chemicals, even before such exposures occur [2]. Historically, experimental observations of toxicity, first described by Paracelsus ca. 1534, were re-framed into proper test methods during the 20th century [3]. Those methods mainly consisted of measuring adverse health outcomes in homogeneous animal groups at lethal or near-lethal doses [4] and extrapolating from them empirically to estimate presumably safe doses in humans [5]. Since the 1940s, the basic experimental protocols for assessing the effects of chemicals on health have changed little. While that approach has provided very important results for nearly a century, it remains costly and resource-intensive [6]. In numbers, global yearly spending on animal experimentation reaches about €10 billion, 20% of it for toxicology alone, and some 100 million animals are sacrificed worldwide every year [7]. Moreover, animal studies are low-throughput: too slow to screen the more than 80,000 chemicals already commercialized and the new chemical entities reaching the market every year [8]. In addition, animal-to-human transposition is not always reliable and is affected by many uncertainties. We are not 70 kg rats: basal metabolic rates and metabolic pathways are among the major species-specific differences that make inter-species transposition difficult and imprecise [9]. Besides, the extrapolation from high-dose effects to low-dose responses is very difficult to validate. Finally, standardized animal tests make it difficult to take into account metabolic differences between age groups and inter-subject variability in human populations [10], even though progress has recently been made in that area [11].

The aforementioned hurdles created pressure to develop human-cell-based models. The need for a paradigm shift in toxicology started to emerge around 1980 [12]. The "3R's" principle of replacement, reduction and refinement had found little echo in toxicology until then, when scientific and technological advances and financial, ethical and legislative imperatives converged.

Currently, European Union (EU) and United States of America (US) legislative and scientific initiatives are running in parallel, pushing for change since the beginning of the 21st century. Advances in molecular biology, cell biology (with stem-cell technologies [13]), bioinformatics, systems biology and computational toxicology have introduced innovative methods, steering toxicology away from animal-based low-throughput in vivo methods toward a high-throughput mechanistic science [14]. High-throughput screening (HTS) tests offer a rapid examination of thousands of single agents or complex mixtures per day at relevant exposure levels [15]. HTS assays using human primary cells or cell lines allow the investigation of toxic effects in humans of different life stages and ethnicities [1]. With the support of in silico (mathematical) methods, HTS has the potential to largely improve the human health risk assessment of chemical agents [16, 17].

In parallel, exposure assessment has evolved considerably since its first steps in the 1980s [18]. Financial and human efforts were made to produce representative and suitable data to refine exposure assessment [19]. Mathematical modeling and growing computational capacities have also made it possible to introduce probabilistic methods that account for variability in population behavior and in chemical levels, instead of using mean or maximum point estimates [20]. Tiered exposure and risk assessment strategies were developed to guide risk assessors in identifying priorities for risk management, taking into account the available information and uncertainty [21]. New data, such as biomarkers of exposure from human cohorts, require the use of toxicokinetic models to be linked with external exposure assessment and interpreted for risk management.
Moreover, regulatory pressures such as the Registration, Evaluation and Authorization of Chemicals (REACH) regulation and risk assessment recommendations [21] push risk assessors to develop high-throughput methods to assess exposure to a large range of chemicals while accounting for all sources and routes of exposure.

In this review, we will examine the scientific and regulatory factors that led to the development of HTS, take a closer look at the assays and techniques used, investigate the limitations and challenges of this innovation, and discuss the steps ahead toward a well-founded high-throughput risk assessment (HTRA) practice.

HTS development context

The HTS concept was not common until 1988, when it was presented at the annual conferences of the American Society for Microbiology and the Society for Industrial Microbiology (SIM), and at the time it was only concerned with "natural products". The SIM further sponsored two "Screening Forum" meetings, in Vienna in 1993 and Princeton in 1994 [22]. By 2005, HTS had evolved, making its way into the pharmaceutical industry [23, 24], and was later adapted to agrochemicals [25, 26].

However, toxicological research did not evolve by virtue of innovation alone. Several EU and US initiatives ran in the same direction, pushing for change since the beginning of the 21st century [6] (figure 1). We focus next on those efforts, noting that Japan followed somewhat later [27].

In the European Union

The 7th Amendment to the cosmetics directive

On January 15, 2003, the European 7th Amendment (2003/15/EC) to the Cosmetics Directive (76/768/EEC) restricted the use of animals in all cosmetic testing [28]. It also set a time frame for the development and validation of alternative methods for toxicity testing [29]. In 2009, a first restriction on animal-based acute toxicity testing took effect [4]. By 2013, by European law, all new cosmetic ingredients intended for the European market had to be animal-test-free. That legislation has become a motor of change and pushed for the development of validated alternative testing strategies [30].

The REACH regulation

Adopted by the European Commission in 2003 and implemented in 2007, REACH established a European regulatory framework for the safety assessment of chemicals produced or imported in quantities greater than one ton per year [31-33]. It calls for the development of in silico and in vitro testing methods and integrated toxicity testing strategies, keeping in vivo experiments as a last resort. That comprehensive program aims at evaluating the risks of more than 30,000 synthetic chemicals already in use in Europe by 1 June 2018 [34].

Other European Union initiatives

European actions have not only been legislative or regulatory. The FP6, FP7 and H2020 EU research programs have consistently accompanied legislation by pushing for the development of the corresponding knowledge and technologies. The EU has funded and launched many large-scale projects with different themes: AcuteTox in acute toxicity alternative testing, SCREEN-TOX and StemBANCC in stem-cell technology, COSMOS in computational modeling, NOTOX in systems biology, TOXBANK in integrated data analysis, and the SEURAT cluster and EU-ToxRisk in predictive toxicology, among others.

In the United States of America

The National Toxicology Program (NTP) road-map

Aware of the above-mentioned developments, the NTP proposed in 2004 a road-map for the future of toxicology testing, entitled "A national toxicology program for the 21st century" [8], which called for a shift from observational methods towards more predictive, target-specific and mechanism-based alternative assays. It also placed emphasis on tools like physiologically based pharmacokinetic (PBPK) modeling [35] and quantitative structure-activity relationships (QSAR) to better support quantitative risk assessment. In 2005, the NTP initiated a collaboration with the National Chemical Genomics Center (NCGC) to develop chemical libraries and HTS assays [1, 5].

The Environmental Protection Agency (EPA) ToxCast program

The "Toxicity Forecaster" (ToxCast) is a multi-year research program launched in 2007 by the EPA to run automated HTS in vitro assays and in silico analyses for prioritizing further toxicity assessments of chemicals [31]. It is based on bio-activity profiling of chemicals, screening for changes in the activity of cells or proteins after exposure, with the ambition of picking out "remarkable" toxicity from the mass of accumulated data. Another goal is to establish causal links between eventual exposures and effects on biological pathways and targets [36]. Obviously, the latter calls for the development of high-throughput exposure, toxicodynamic and toxicokinetic models [37].

The 309 chemicals of ToxCast phase I were mostly well-characterized (by animal-based toxicity assays) agrochemicals or food-crop pesticides [38]. For those molecules, concentration-response data were obtained in more than 600 cell-based or biochemical assays [39-41]. In ToxCast phase II, another 767 chemicals from a broad range of sources, including some failed pharmaceuticals, food additives, nanomaterials, etc., were screened in almost 700 HTS assays [36]. All the data collected are available online in a variety of formats.

The National Research Council (NRC) vision and strategy report

Commissioned by the EPA in 2005, the NRC report entitled “Toxicity testing in the 21st century: a vision and a strategy” proposed to government, academia, and industry, a paradigm shift in toxicology through the application of emerging disciplines and technologies (omics, systems biology, computational modeling, etc.) [15, 42]. The proposed approach advocates heavier use of mechanistically informative in vitro assays to study how chemicals interact with cellular response networks and turn them into toxicity pathways [43].

The report considers four options for toxicity testing (table 1): option 1 represents the status quo and relies primarily on animal-based in vivo tests; option 2 takes into consideration the available information on the substance studied and its mechanisms of action, and is already operational. The remaining two options respond to the NRC's call to different degrees: the extreme option 4 calls for an in vivo-free strategy (as envisioned in the European Union for cosmetics ingredients), while the intermediate option 3 leaves open the possibility of using animal-based tests as a complement to innovative mechanistic approaches [44].

Toxicity testing in the 21st century (Tox21)

Tox21 is another collaborative testing and evaluation program that was established in 2008 via a Memorandum of Understanding between the NTP, the NCGC, and the EPA, later joined by the US Food and Drug Administration [45]. Tox21 is ongoing and its chemical library contains over 8,000 chemicals of different kinds (pesticides, marketed pharmaceuticals, food additives, industrial chemicals, cosmetic ingredients, chemicals found in household products and clothes etc.) [46].

In vitro HTS methods

As we have seen above, the field of toxicology has evolved significantly, both through the progressive introduction of in vitro and in silico methods and within each of those technical fields [47]. For instance, cell count and lactate dehydrogenase activity in the culture medium were at some point the only cytotoxicity endpoints measurable in vitro [48]. Nowadays, different cell death pathways are known and their activation can be followed using many cellular biomarkers, with high throughput [34].

In vitro HTS has both qualitative and quantitative advantages. "HTS" is a relative concept [49], but, quantitatively, it can be defined as the set of screening techniques that can be scaled up to test libraries of molecules at a rate exceeding thousands of structures daily [50], in a concentration-response format [42], using standardized protocols [6]. Qualitatively, a distinct advantage of HTS is its ability to test complex mixtures and to combine experimental conditions and endpoints to develop extensive dose-response relationships for different pathways, across large concentration ranges [51] and for different exposure schedules [52]. That capability relies on rigorous robotic spotting technologies, miniaturization of the assay vial (now micro-plates), and automation [10]. The capacity of the micro-plates has increased significantly with time: from the 96-well plates originally used in virology [53] to the 384- and 1,536-micro-well plates currently in use [54], the equipment has been gradually improved to test more molecules and concentrations [34]. The volume of the wells in a micro-plate has also decreased, down to volumes as low as 2 μL [55]. Obviously, the sensitivity of the detection methods had to improve in parallel. HTS assays are either cell-free (biochemical), cell-based, or even organism-based, as explained below.

Cell based assays

The concept of HTS cellular assays is based on measuring the activity of cellular signaling pathways disturbed by chemical exposures [42]. These tests can be divided into several categories. Second-messenger assays are used to measure receptor stimulation or ion-channel activation. They monitor signal transduction from stimulated cell-surface receptors by measuring fluorescent molecules that respond, for example, to changes in intracellular calcium ion concentration [56]. Reporter gene assays measure changes in transcription/translation activity through reporter genes, often carried on plasmids, coding for enzymes such as luciferase or green fluorescent protein (GFP), which are rather easy to detect [57, 58]. Other cell-based systems are used for specific endpoints, e.g., "cell proliferation assays" that assess growth rate, or the "yeast complementation assay" (on Saccharomyces cerevisiae), which tolerates extreme chemical conditions [56]. Cell-based assays can also use different types of cells, of which human "primary cells" and "stem cells" are of particular interest.

Primary cells

Primary cells are freshly isolated from human or animal tissues and organs (liver, brain, kidney, skin, etc.) [59-61]. They can be grown in monolayer cultures [62], but since 3D cultures (e.g., spheroids) have been observed to better maintain organ-specific functions [63], cell-cell interactions, signal transduction and gene expression [64, 65], their use in this field is increasing.

Stem cells

Many features make human stem cells and induced pluripotent stem cells (iPSC) attractive for toxicity screening. Beyond their uniform physiology and donor-specific genetic profile, they have unlimited self-renewal potential and are pluripotent (and therefore differentiable into various other cell types, such as hepatocytes, cardiomyocytes, neurons, etc.). Human stem cells can be derived from embryonic cultures (isolated from the inner cell mass of the blastocyst [66]), adult tissues (bone marrow [67], skin [68], liver [69], umbilical cord blood [70], and brain [71]), or, for iPSC, through genetic reprogramming of easily accessible cells (such as skin fibroblasts or renal epithelial cells shed in urine) [72]. Although embryonic stem cells have a higher degree of pluripotency than iPSC, they remain the subject of ethical debate. Furthermore, the difficulty of inducing reliable and efficient differentiation of all cells in one culture remains a major limitation of these techniques [73], but progress is being made to alleviate that problem.

Cell-free (biochemical) assays

Although cell-based assays have the advantage of being able to identify, in a single pass, the quasi-totality of messengers related to the target of interest within the cell [22], cell-free biochemical assays are still widely used [74]. Assay volume reduction has required improved-sensitivity techniques like scintillation proximity assays and fluorescence detection methods, which are now applied in the majority of biochemical tests [56], regardless of the assay target (receptor-binding, direct enzyme activity and second-messenger assays being increasingly the three main kinds of biochemical assays used) [74]. The most popular fluorescence techniques are fluorescence resonance energy transfer, fluorescence polarization, homogeneous time-resolved fluorescence, fluorescence correlation spectroscopy, and fluorescence intensity distribution analysis [56].

Organism-based assays

While traditional animal models are in decline, non-mammalian animal testing is an emerging option for HTS and is getting the support and promotion of large research entities like the NRC. Organisms like the zebrafish (Danio rerio) and Caenorhabditis elegans possess physical and physiological characteristics, well conserved in mammals, that are important for a better understanding of the potential health effects of some chemicals at the organism level [75]. The zebrafish permits organ-focused behavioral toxicity assessment as well as the simultaneous study of toxicity in multiple systems in parallel [76]. Caenorhabditis elegans is mainly used for its special biology, which is evolutionarily conserved and very well characterized [77]. Both species can support good HTS productivity.

In silico predictive chemistry methods

In analogy with the commonly used "in vitro" and "in vivo", the term "in silico" describes an analysis performed on a computer [43]. In toxicology, in silico techniques, also called "computational toxicology", form a sub-discipline that uses computer and mathematical models to understand and predict the pathophysiological mechanisms of toxicity and their ultimate outcome as adverse effects [78, 79]. Computational toxicology offers remarkable possibilities by allowing the analysis of a large number of chemicals and biological interactions, yet more proof-of-concept studies are needed to demonstrate its added value and have it fully adopted by risk assessors and regulators [42].

Several in silico molecular modeling techniques can help in characterizing the effects that chemical substances elicit on biological systems. These tools range from the definition of toxicophores or pharmacophores (the set of molecular features common to chemicals exhibiting a particular biological activity) to ab initio molecular dynamics simulations of chemical reactions [80]. Among these techniques, qualitative and quantitative structure-activity relationships (Q)SARs are widely applied in toxicological research and regulatory decision-making [81]. QSARs are based on the hypothesis that similar chemicals induce similar effects on biological systems. They formalize a mathematical relationship (e.g., a linear regression) between molecular properties (e.g., lipophilicity, energy of frontier orbitals) and biological effects (e.g., genotoxicity) [81]. Because of their computational tractability QSAR models are particularly fit for virtual screening and ranking of large chemical inventories according to potential toxicological hazards [82]. In addition, QSAR models benefit from well-established and internationally approved validation principles that facilitate the mutual exchange of data and provide a rigorous framework for the analysis and critical assessment of their predictions [81]. Several examples of QSAR application to the virtual and rapid screening of chemical inventories can be found in the literature. For instance, QSAR models have been applied to the identification of putative endocrine disruptors [83] or persistent organic pollutants [84] and to the ranking of 70,893 chemicals according to their carcinogenic, mutagenic, or reprotoxic properties [85]. It is also worth mentioning that their potential in virtual screening approaches makes them particularly suited to the analysis of chemical hazards resulting from natural or human-made disasters [86].
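To make the idea concrete, the linear-regression flavor of QSAR described above can be sketched in a few lines; all chemical names, descriptor values and activities below are hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical descriptor matrix for 5 training chemicals:
# columns = [logP (lipophilicity), E_HOMO (frontier-orbital energy, eV)]
X = np.array([
    [1.2, -9.1],
    [2.5, -8.7],
    [3.1, -8.9],
    [0.8, -9.4],
    [4.0, -8.5],
])
# Hypothetical biological activity endpoint, e.g., log(1/EC50)
y = np.array([0.9, 1.8, 2.2, 0.5, 2.9])

# Fit the linear QSAR y = b0 + b1*logP + b2*E_HOMO by ordinary least squares
A = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(logp, e_homo):
    """Predict the activity of an untested chemical from its descriptors."""
    return coef[0] + coef[1] * logp + coef[2] * e_homo

# Virtual screening: rank a (hypothetical) inventory by predicted activity
inventory = {"chem_A": (2.0, -8.8), "chem_B": (3.5, -8.6)}
ranked = sorted(inventory, key=lambda c: predict(*inventory[c]), reverse=True)
```

Real QSAR applications of course use many more descriptors, larger training sets, and rigorous validation, but the screening logic (fit, predict, rank) is the same.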

Last but not least, QSAR models are essential components of computational toxicology frameworks since they can be used to predict parameter values for other models, such as pharmacokinetic and toxicokinetic models [87].

High-throughput exposure (HTE) assessment

HTS in toxicology, described previously, allows the prioritization of chemicals by potential hazard. In order to move to risk-based prioritization, estimating exposure is also necessary, as highlighted in the NRC report "Exposure science in the 21st century: a vision and a strategy" [88]. Exposure assessment aims to quantify the amount of an agent (e.g., a chemical substance) that reaches individual subjects. In 1994, the European Commission defined exposure assessment as "the determination of the emission pathways and rates of movement of a substance and its transformation or degradation, to estimate the concentrations/doses to which human populations or environmental spheres (water, soil, air) are or may be exposed" [89]. Various high-throughput methods are being developed to improve exposure assessment considering different sources and routes, chemical mixtures, variability and uncertainty.

External exposure assessment

Ideally, the personal exposure of an individual is quantified at the contact points of the body (skin, mouth, nostrils), using for example badges for radiation exposure or skin patches for dermal exposures, so that both concentration and time of exposure are measured and integrated. However, such exposure measurements are not available in most cases for risk assessment, especially when dealing with many chemicals in a high-throughput exposure (HTE) assessment. In that case, exposure is most often assessed from indirect data and mathematical modeling. Exposure models combine data on the concentrations of chemicals in various media (e.g., soil, water, food) with data on individual behavior (e.g., food consumption, time spent in an area). Those models allow an indirect assessment of the external exposure (the concentration in the media in contact with the body) [90]. Several kinds of exposure assessment are possible and, depending on the chemical, different pathways of exposure can be considered [91]. For adult humans, the most common exposure pathways are inhalation (of outdoor or indoor air, or of house dust) and ingestion of food and drinking water. Exposure resulting from skin contact can also be important in some cases. For children, ingestion of dust and soil due to hand-to-mouth activities may also contribute to exposure for some chemicals. Simultaneous exposure to more than one chemical, from the same or from different exposure pathways, is referred to as "combined exposure" or "co-exposure" assessment [92, 93], and the associated chemicals form a "mixture". Considering more than one pathway or more than one source of exposure for a given chemical is referred to as "aggregated exposure assessment". When several chemicals are shown to share a similar mode or mechanism of action (MOA), they can be placed in cumulative assessment groups (CAG).
Exposure to chemicals belonging to the same CAG is then termed "cumulative exposure" [94]. Models such as Risk Assessment Identification and Ranking (RAIDAR) [95] or the United Nations Environment Program and Society for Environmental Toxicology and Chemistry toxicity model (USEtox) [96], which predict chemical fate and distribution in major environmental media (e.g., air, water) using theoretical exposure scenarios and variables (volume of production, chemical uses, etc.), can be used to rank and prioritize chemicals on the basis of exposure [97]. In this case, the concentrations per unit emission in various environmental media are predicted, and additional assumptions or scenarios are used to predict multiple human exposure pathways (i.e., inhalation, water ingestion, and ingestion of various foods) to yield an overall population intake fraction. In the ExpoCast project, a high-throughput process for exposure-based chemical prioritization was proposed based on these two models [97]. A total of 1,936 chemical species were evaluated using USEtox and RAIDAR and an indicator for indoor and/or consumer use. To validate the results and quantify the uncertainty for prioritization, those predictions were compared to exposures inferred by Bayesian analysis from urine concentrations for 82 chemicals reported in the National Health and Nutrition Examination Survey (NHANES). This study was later expanded to 7,968 chemicals [98].
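As an illustration of the aggregation logic described above, the sketch below sums route-specific external doses for one chemical; all concentrations and intake rates are hypothetical placeholders, not measured values.

```python
# Sketch of an aggregated external exposure assessment for one chemical.
# Each route contributes: concentration in medium x contact rate / body weight.
# All numeric values below are hypothetical placeholders.

body_weight_kg = 70.0  # default adult body weight

routes = {
    # route: (concentration in medium, daily intake/contact rate)
    "drinking_water": (0.002, 2.0),    # mg/L  x L/day
    "food":           (0.010, 1.5),    # mg/kg x kg/day
    "indoor_air":     (0.0005, 15.0),  # mg/m3 x m3/day inhaled
}

def route_dose(concentration, rate, bw=body_weight_kg):
    """External dose contributed by one route, in mg/kg bw/day."""
    return concentration * rate / bw

# Aggregated exposure: sum over all pathways for this chemical
aggregate_dose = sum(route_dose(c, r) for c, r in routes.values())
```

A full HTE workflow repeats this calculation for thousands of chemicals and replaces the point values by distributions.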

Focusing on dietary exposure, information about the consumption of individuals in a population is generally provided by the results of surveys, in which data on the dietary behavior of a large sample of the population of interest over short periods of time are collected. The occurrence of chemicals in media can be estimated from measurements (monitoring data or experimental studies) or from reported usage levels. For example, monitoring programs are conducted in many European countries to measure the levels of many chemicals in foods. The European Food Safety Authority (EFSA) has collected on a continuous basis all available data on the occurrence of different chemical contaminants in food and feed since 2010 (Articles 23 and 33 of Regulation (EC) No 178/2002). The occurrence of many chemicals can also be provided by specific studies. Such studies are a valuable and cost-effective complement to food surveillance and monitoring programs for assessing the presence of many chemicals in the population's diet [99], and can provide reliable data for HTE assessment. The European Total Diet Study (TDS), for example, selects, collects and analyzes commonly consumed foods purchased at retail level, on the basis of food consumption data, to represent a large portion of the typical diet. The food is processed as for consumption, pooled into representative food groups, and analyzed for harmful and beneficial chemical substances [100]. The TDS has so far established a list of more than 445 chemicals classified into eight main groups of relevance for food safety. External exposures or intakes are then assessed by combining data on chemical occurrence and individual behavior, such as food consumption.
When data are sufficient to estimate statistical distributions, a fully probabilistic exposure assessment can be conducted by combining distributions of chemical occurrence and individual behaviors, integrating variability and uncertainty [20]. Regarding dietary exposures to mixtures, a Bayesian approach has been developed to model the exposure profiles of the French population to a large range of pesticides and to extract the main mixtures [101]. Those mixtures, based on observed exposures, were then studied for their toxicity [102]. The Monte Carlo Risk Assessment (MCRA) web-based software system integrates European databases and probabilistic models and makes it possible to assess dietary exposure to a single substance or cumulative exposure for chemical families [103]. In the framework of the European project EUROMIX, developments are in progress to give MCRA high-throughput capacities: combined probabilistic exposures to a wide range of chemicals will be available, a mixture selection method based on exposure will be implemented, and the software will be linked to European models assessing exposures from cosmetics, air and dust, and to pesticides for residents, bystanders and workers, in order to aggregate exposures from various sources.
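A minimal probabilistic dietary exposure calculation, of the kind implemented far more completely in tools like MCRA, can be sketched as follows; the distributions and their parameters are purely illustrative assumptions, not real survey or occurrence data.

```python
import random

random.seed(1)  # reproducible sketch

# Probabilistic dietary exposure to one chemical: draw individual behavior
# (body weight, food consumption) and chemical occurrence from assumed
# distributions, instead of using mean or maximum point estimates.

def simulate_exposure():
    bw = random.gauss(70.0, 10.0)                    # body weight, kg
    consumption = random.lognormvariate(-1.0, 0.5)   # food intake, kg/day
    occurrence = random.lognormvariate(-4.0, 1.0)    # contamination, mg/kg food
    return consumption * occurrence / bw             # exposure, mg/kg bw/day

# Propagate variability by Monte Carlo sampling
samples = sorted(simulate_exposure() for _ in range(10_000))
mean_exposure = sum(samples) / len(samples)
p95_exposure = samples[int(0.95 * len(samples))]  # high-percentile consumer
```

The output distribution directly provides the upper percentiles (here the 95th) used to characterize highly exposed consumers, which point estimates cannot do.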

Biomonitoring data and internal exposure assessment

Biomonitoring consists of assessing human exposure to chemicals by measuring the chemicals or their metabolites in human tissues or specimens, such as blood or urine (NRC, 2006). Biomonitoring data thus refer to analytical measurements of biomarkers in such body tissues or products [104]. Biomarkers can be any substance, structure or process that indicates an exposure, a susceptibility to disease, or an already ongoing disease process. Consequently, three types of biomonitoring data can be defined: biomarkers of internal exposure (e.g., quantity of substance per gram of fat in blood), biomarkers of biological susceptibility (e.g., cytochrome P450 polymorphism) and biomarkers of effect (e.g., acetylcholinesterase inhibition). We focus here on biomarkers of internal exposure. As highlighted in [104], biomonitoring data have many advantages for exposure assessment. First, they integrate exposure from all sources and routes, and provide important information for risk assessment in cases of aggregated exposure. Second, they take into account the accumulation of the chemical in the body due to repeated or prolonged external exposure. For example, body burden (the total quantity of a chemical in the body) is considered a very reliable way to assess the risk posed by chemicals known to be slowly eliminated from the body [105]. Third, the collection of serial biomonitoring samples over an extended period of time can provide information on variability and trends in exposure. Fourth, they represent direct measurements of the dose of the chemical that is actually taken up from the environment, i.e., the internal dose, and therefore integrate bioavailability (the fraction of the exposure dose that really enters the body) [106].

In recent years, advances in analytical methods have allowed the measurement of more chemicals, at lower concentrations, using smaller samples of blood or urine. As a result, biomonitoring has become more widely used in public health research and risk assessment. For example, the US implemented NHANES, which samples biomarkers of 265 chemicals each year in a nationally representative sample of about 5,000 persons. Many European regions or countries, as well as China, Japan, South Korea and Canada, have also implemented biomonitoring programs [107]. These programs provide data relevant for estimating exposure to many chemicals.

High-throughput toxicokinetics (HTTK)

We have seen above how HTS can provide in vitro data linking cellular concentration to effects, and how high-throughput human exposure assessment is evolving. To make the link and proceed to high-throughput risk assessment, we need to connect external exposures to target cell concentrations in vivo. That is where toxicokinetic (TK) models come into play. TK models can be defined as "mathematical descriptions simulating the relationship between external exposure level and chemical concentration in biological matrices over time" [108]. They describe the absorption, distribution in the body, metabolism and excretion (ADME) of chemicals and their metabolites. They vary in level of complexity and can be classified into two main categories: data-based (non-compartmental or compartmental) models and predictive physiologically based toxicokinetic (PBTK) models. In the context of high-throughput analyses, the latter are mostly used, since fitting data-based models is data- and time-consuming.
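As a minimal illustration of the kind of model involved, a one-compartment TK model with continuous intake and first-order elimination can be written and solved numerically in a few lines; all parameter values are hypothetical, and PBTK models used in practice have many more compartments and physiological parameters.

```python
# One-compartment toxicokinetic model with continuous intake and
# first-order elimination:  dC/dt = dose_rate * F_abs / V - ke * C.
# All parameter values below are hypothetical.

F_abs = 0.8       # absorbed fraction (bioavailability)
V = 42.0          # volume of distribution, L
ke = 0.1          # first-order elimination rate constant, 1/h
dose_rate = 1.0   # continuous intake rate, mg/h

# Simple Euler integration of the differential equation
dt = 0.01         # time step, h
C = 0.0           # initial concentration, mg/L
for _ in range(int(500.0 / dt)):  # simulate 500 h, long enough to plateau
    C += dt * (dose_rate * F_abs / V - ke * C)

# Analytical steady state, for comparison
C_ss = dose_rate * F_abs / (ke * V)
```

The simulated concentration converges to the analytical steady state, which is the quantity typically matched against in vitro bioactive concentrations in high-throughput applications.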

Recently, high-throughput TK (HTTK) methods have been proposed for in vitro to in vivo extrapolation [109]. Wambaugh et al. [110] developed and distribute an R software package called "httk", in which four PBTK models (from simple to complex) are proposed. They are designed to be easily parameterized using high-throughput in vitro data or structure-derived physico-chemical properties and species-specific physiological data. ADME data for 349 chemicals (still relatively few) are already provided with the package. The package also makes it possible to perform Monte Carlo sampling and reverse dosimetry. Reverse dosimetry uses toxicokinetic modeling to estimate intake doses or external environmental concentrations from measured tissue concentrations (biomarkers of internal exposure) [90, 111]. It has already been used for exposure assessment of a few substances [112, 113] and has recently been proposed in the context of HTE assessment [98].
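The logic of reverse dosimetry can be illustrated in its simplest, steady-state form; the actual httk implementation is far more sophisticated, and the parameter values below are hypothetical.

```python
# Reverse dosimetry in its simplest steady-state form. For a chemical at
# steady state in a one-compartment model, C_ss = dose / (ke * V), so a
# measured biomarker concentration can be inverted into an intake estimate.
# All parameter values below are hypothetical.

ke = 0.05          # elimination rate constant, 1/day (assumed)
V = 15.0           # volume of distribution, L (assumed)
C_measured = 0.02  # biomarker concentration in blood, mg/L (assumed)

# Invert the steady-state relation to estimate the daily intake
dose_estimate = C_measured * ke * V  # mg/day

# Forward check: the estimated dose reproduces the measured concentration
C_check = dose_estimate / (ke * V)
```

In practice, Monte Carlo sampling over the uncertain TK parameters turns this point estimate into a distribution of plausible intake doses for a population.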

The in vivo predictive ability of HTTK methods is an issue. In drug development, HTTK methods have been shown to estimate therapeutic doses for clinical studies on the order of values measured in clinical trials [114]. For environmental contaminants, no clinical trials are conceivable, and validation of the approach for humans is challenging. It has been shown to work reasonably well for rats on a small sample of 59 chemicals [115]. However, the need for chemical-specific analytical chemistry methods to inform ADME, and in particular metabolism, makes HTTK slower than bioactivity HTS or HTE.

High-throughput risk assessment (HTRA)

High-throughput toxicity and exposure assessments logically feed into HTRA. The preoccupation with fast risk assessment methods is not new. Faced with a large number of chemicals to evaluate, the California EPA implemented expedited risk assessments in the early 1990s [116]. The procedure was not fully computerized, however. Nowadays, HTRA screening methods can automatically incorporate values from exposure datasets and toxicity testing databases to make a risk-based prioritization of potentially harmful chemicals [117]. A limitation of the works published so far is that they directly relate toxicity values observed in cell cultures to blood or plasma concentrations inferred from exposure. Cellular effects do not necessarily translate immediately or to the same extent into organ effects, and even less into human pathologies. More sophisticated dose-response modeling is therefore required, and the adverse outcome pathway (AOP) framework is actively researched to that end [118].
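At its core, such a risk-based prioritization is a ratio: the dose at which in vitro bioactivity appears (converted by IVIVE to an oral equivalent dose) divided by the predicted exposure, with the chemicals showing the narrowest margins flagged first. The sketch below uses invented chemical names and values, purely for illustration:

```python
def bioactivity_exposure_ratio(oed_mg_kg_day, exposure_mg_kg_day):
    """Margin between the oral equivalent dose (OED) at which in vitro
    bioactivity appears and the estimated exposure; smaller is worse."""
    return oed_mg_kg_day / exposure_mg_kg_day

# Hypothetical (OED, exposure) pairs in mg/kg/day
chemicals = {"chem_A": (1.0, 0.02),
             "chem_B": (50.0, 0.001),
             "chem_C": (0.1, 0.05)}

# Rank chemicals from narrowest to widest margin
priority = sorted(chemicals,
                  key=lambda c: bioactivity_exposure_ratio(*chemicals[c]))
```

Here `priority[0]` would be the chemical with the narrowest bioactivity-to-exposure margin, i.e., the first candidate for further, more refined assessment.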

In our current work [119, 120], we are trying to improve on this and propose to couple ExpoCast data, pharmacokinetic modeling, and systems biology modeling to predict the effects of mixtures of aromatase inhibitors on the menstrual cycle (figure 2). The method is able to predict the endocrine disrupting potential of millions of mixtures involving approximately 250 chemicals, but the computational burden is high and beyond the capabilities of most potential users at the moment.

Limitations and challenges

As toxicology moves to another level of development and accuracy, modern HT technologies are not problem-free, and the path toward the target is not free of challenges. HT exposure assessment, like any high-dimensional process, has limitations that will have to be tackled in the future. In this section, we discuss the limitations of HT methods in five major areas and some possible solutions.

Experimental noise

Due to the settings of in vitro techniques, most HTS assays are subject to a non-negligible level of “experimental noise”. Even a non-exhaustive list of factors contributing to this noise is rather long: non-physiological conditions, low cell density and diversity, lack of homeostasis, insufficient oxygen supply [30], local dosimetry, critical windows of sensitivity, genetic variability [50], confounding stressors [121, 122], etc. Moreover, the physical properties of solvents and tested chemicals add another hurdle: the choice of dimethyl sulfoxide (DMSO) as a solvent for chemical storage has undeniable advantages (ability to dissolve both polar and non-polar compounds), but it also suffers from a number of weaknesses (unknown interactions with the tested chemicals, risk of water absorption, lysis of chemicals [28], and inability to test volatile environmental pollutants [123]). However, some of those experimental biases can be removed by rigorous quality control [6]. The remaining uncertainty attached to the data collected requires, in any case, sound statistical analyses.

Miniaturization and the robustness of data

HTS relies heavily on robotics and miniaturization; despite the gains in time and cost it offers, it is not devoid of problems. Increasing throughput [124] (leading to ultra-HTS, able to conduct 100,000 assays per day [125]) yields large but poorly informative data sets in which quality is sacrificed for the sake of quantity. The major obstacle facing these screening formats is their high propensity to produce both type 1 (false positive) and type 2 (false negative) errors [56]. Fortunately, those can be alleviated by testing chemicals at multiple concentrations to derive concentration-response relationships, which are more robust and indicative of causality [5].
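In practice, concentration-response testing usually amounts to fitting a sigmoidal (Hill) model to the per-concentration responses and calling a chemical active only when the fit is convincing. A minimal sketch on synthetic, noise-free data, with a crude grid search standing in for a proper nonlinear fit:

```python
def hill(conc, top, ac50, n):
    """Hill model: response at concentration conc, given the maximal
    response top, half-maximal concentration ac50, and Hill slope n."""
    return top * conc**n / (ac50**n + conc**n)

# Synthetic responses generated with a known AC50 of 3.0
concs = [0.1 * 2**i for i in range(10)]
obs = [hill(c, top=1.0, ac50=3.0, n=1.5) for c in concs]

# Crude least-squares grid search over AC50 (top and n held fixed)
sse, ac50_hat = min(
    (sum((hill(c, 1.0, a, 1.5) - o) ** 2 for c, o in zip(concs, obs)), a)
    for a in (0.5 * 1.1**k for k in range(60)))
```

With real, noisy assay data the same principle applies, but the fit quality (and a comparison against a flat, no-response model) is what separates genuine actives from type 1 errors.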

In vitro to in vivo extrapolation (IVIVE)

A criticism of the current use of HT in vitro data is the lack of sophistication of their IVIVE approaches. They tend to disregard the role of tissues and organs in the development of toxic responses. Dealing with toxicity as if it were only cell-based pays no heed to the spatio-temporal framework of its mechanisms: for example, metabolism may take place in different tissues [48], and effects such as cancer can have a longer timescale (months, years) than the duration of an in vitro experiment [126]. Nevertheless, advances in in vitro testing are being made daily. In liver-cell-based assays, for example, three-dimensional systems now include different types of cells [127], moving the focus from the study of cells to the study of reconstructed tissues [128]. Meanwhile, in the same spirit but with computational tools, different groups are developing “virtual livers” that integrate anatomical, physiological, and biochemical information [42, 129].

Other improvements can be made with better pharmacokinetic data and models for quantitative IVIVE, for example using PBPK modeling [130, 131]. ADME processes are among the main missing ingredients in current in vitro experiments [132], but they can be predicted by PBPK computational models [8]. This approach is becoming more popular in both pharmaceutical and toxicological research [133, 134].

HTE assessment

The necessary integration of a high number of chemicals and exposure sources tends to compromise the quality and accuracy of the results. The collection of a wide range of representative and adapted data is difficult and time-consuming. Models should also be tested on different datasets and cases, which is not always possible. Moreover, uncertainty and variability (between individuals, geographic areas, etc.) must be addressed in every exposure and risk assessment to interpret results according to their degree of confidence [135].


Validation

In spite of all the advantages of HTS, it still lacks a well-defined validation protocol. In parallel to granting HTS methods the power to contribute to regulatory decisions, there should be a formal process to evaluate their reliability, relevance, and fitness for purpose. However, the current validation process for in vitro tests, demanding enough to assure equivalent or better health protection than current procedures, runs contrary to the key advantages of HTS, as it is costly, time-consuming, and low-throughput [50]: high-throughput validation is in order!


Conclusion

Even though a “golden era” of toxicology may be approaching [123], it is reasonable not to expect a quick and dramatic shift in strategies [34]. The fact that accurate, mechanistic, high-throughput in vitro and in silico tools are progressively replacing traditional in vivo testing [136] is consolidating the belief that solving biological problems requires the help of mathematics and informatics [121]. However, fast tests are typically simplified, and data quality and relevance are difficult to maintain. Some data (e.g., on the integration of effects at the whole human body level) are completely lacking, and models must be used instead. The validation of those models (for exposure or effect), in turn, is difficult to perform in the absence of data... Therefore, during this mid-term transition of techniques, a wise choice might be to integrate in vitro, in silico, and in vivo methods [43] within “Integrated Testing Strategies” [48], a plan requiring the combined efforts of governmental and non-governmental organizations [137]. Within that plan, well-established quantitative IVIVE processes are a necessity [110]. Toxicology is focusing on biological pathways [7], in which HTS data can be incorporated [118] to link adverse outcomes to environmental stressors through well-defined modes of action [127]. For exposure assessment, even though better methods and data are available to estimate the risks related to a wide range of substances, further efforts must be made to address new challenges, such as accounting for mixture effects and for all sources and routes of exposure, and reducing uncertainty. Exposure and hazard assessments must be improved jointly for better risk assessments.

Acknowledgements and disclosures

This publication reflects only the author's views and neither the IMI JU nor EFPIA nor the European Commission are liable for any use that may be made of the information contained therein.

Funding: The research leading to these results has received funding from the Innovative Medicines Initiative Joint Undertaking, under Grant Agreement number 115439 (StemBANCC), resources of which are composed of financial contribution from the European Union Seventh Framework Programme (FP7/2007-2013) and EFPIA companies in kind contribution.

Conflicts of interest: none.