John Libbey Eurotext

Environnement, Risques & Santé


High-throughput methods for toxicology and health risk assessment Volume 16, issue 1, January-February 2017


  • Figure 1
  • Figure 2


1 Sorbonne universités, UTC, UMR 7338 Génie biologique, Centre de recherche, rue Personne de Roberval, 60200 Compiègne
2 Risk Assessment Department, 14, rue Pierre et Marie Curie, 94701 Maisons-Alfort Cedex
3 Parc ALATA, BP 2, 60550 Verneuil-en-Halatte
  • Key words: toxicology, risk assessment, computational biology, in vitro technique, environmental exposure
  • DOI: 10.1684/ers.2016.0943
  • Page(s): 44-58
  • Published in: 2017

Toxicology is shifting its experimental approaches from animal testing to less expensive, more ethical, and more relevant methods. Since the beginning of this century, various regulations and research programs on both sides of the Atlantic have pushed for and contributed to this change. Modern toxicology relies on two main components: in vitro testing and in silico analyses. Toxicology has also entered a world of “big data” production, switching from low-throughput to high-throughput screening. Complementary to the assessment of toxicological impact, a large effort has also been made to evaluate human exposure to chemicals: new human and field surveys, analytical measurements, computational capacities, and the use of mathematical modeling have opened new possibilities for exposure assessment. Accounting for several sources and routes of exposure, estimating combined exposure to mixtures, integrating exposure variability, and simulating long-term exposure are new challenges on their way to being solved. In addition, biomonitoring data, internal exposure biomarkers, and toxicokinetics are all adding to the list of tools and techniques helping to link the pieces of the still incomplete puzzle of high-throughput risk assessment. High-throughput applications in toxicology have nonetheless been criticized for their inadequate representation of biological interactions at the organism level, for the experimental noise they suffer from, for the complexity of in vitro to in vivo extrapolation, and for their as yet undefined validation protocols. We propose here a brief panorama of these developments.
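As a concrete illustration of the toxicokinetic modeling mentioned above, the sketch below implements the classical one-compartment model with first-order absorption and elimination (the Bateman equation), which relates an external dose to an internal plasma concentration over time. This is a minimal, generic example, not the specific models reviewed in the article; all parameter values (dose, absorption and elimination rate constants, volume of distribution) are hypothetical placeholders.

```python
import math

def concentration(t, dose=100.0, f_abs=1.0, ka=1.0, ke=0.1, vd=40.0):
    """Plasma concentration (mg/L) at time t (h) for a one-compartment
    toxicokinetic model with first-order absorption (rate ka, 1/h) and
    first-order elimination (rate ke, 1/h); Bateman equation.
    dose in mg, f_abs = absorbed fraction, vd = volume of distribution (L).
    All default values are illustrative, not taken from the article."""
    return (f_abs * dose * ka) / (vd * (ka - ke)) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

def t_max(ka=1.0, ke=0.1):
    """Time of peak concentration for first-order absorption/elimination."""
    return math.log(ka / ke) / (ka - ke)

if __name__ == "__main__":
    tm = t_max()
    print(f"t_max = {tm:.2f} h, C_max = {concentration(tm):.2f} mg/L")
```

Such simple analytic models are the building blocks that more detailed physiologically based toxicokinetic (PBTK) models extend; in a high-throughput setting they allow in vitro effective concentrations to be translated into equivalent external doses.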