Tag Archives: alternatives to animal testing

Artificial intelligence (AI) brings together the International Telecommunication Union (ITU) and the World Health Organization (WHO), and AI outperforms animal testing

Following on my May 11, 2018 posting about the International Telecommunication Union (ITU) and the 2018 AI for Good Global Summit in mid-May, there’s a follow-up announcement. My other bit of AI news concerns animal testing.

Leveraging the power of AI for health

A July 24, 2018 ITU press release (a shorter version was received via email) announces a joint initiative focused on improving health,

Two United Nations specialized agencies are joining forces to expand the use of artificial intelligence (AI) in the health sector to a global scale, and to leverage the power of AI to advance health for all worldwide. The International Telecommunication Union (ITU) and the World Health Organization (WHO) will work together through the newly established ITU Focus Group on AI for Health to develop an international “AI for health” standards framework and to identify use cases of AI in the health sector that can be scaled-up for global impact. The group is open to all interested parties.

“AI could help patients to assess their symptoms, enable medical professionals in underserved areas to focus on critical cases, and save great numbers of lives in emergencies by delivering medical diagnoses to hospitals before patients arrive to be treated,” said ITU Secretary-General Houlin Zhao. “ITU and WHO plan to ensure that such capabilities are available worldwide for the benefit of everyone, everywhere.”

The demand for such a platform was first identified by participants of the second AI for Good Global Summit held in Geneva, 15-17 May 2018. During the summit, AI and the health sector were recognized as a very promising combination, and it was announced that AI-powered technologies such as skin disease recognition and diagnostic applications based on symptom questions could be deployed on six billion smartphones by 2021.

The ITU Focus Group on AI for Health is coordinated through ITU’s Telecommunication Standardization Sector – which works with ITU’s 193 Member States and more than 800 industry and academic members to establish global standards for emerging ICT innovations. It will lead an intensive two-year analysis of international standardization opportunities towards delivery of a benchmarking framework of international standards and recommendations by ITU and WHO for the use of AI in the health sector.

“I believe the subject of AI for health is both important and useful for advancing health for all,” said WHO Director-General Tedros Adhanom Ghebreyesus.

The ITU Focus Group on AI for Health will also engage researchers, engineers, practitioners, entrepreneurs and policy makers to develop guidance documents for national administrations, to steer the creation of policies that ensure the safe, appropriate use of AI in the health sector.

“1.3 billion people have a mobile phone and we can use this technology to provide AI-powered health data analytics to people with limited or no access to medical care. AI can enhance health by improving medical diagnostics and associated health intervention decisions on a global scale,” said Thomas Wiegand, ITU Focus Group on AI for Health Chairman, and Executive Director of the Fraunhofer Heinrich Hertz Institute, as well as professor at TU Berlin.

He added, “The health sector is in many countries among the largest economic sectors or one of the fastest-growing, signalling a particularly timely need for international standardization of the convergence of AI and health.”

Data analytics are certain to form a large part of the ITU focus group’s work. AI systems are proving increasingly adept at interpreting laboratory results and medical imagery and extracting diagnostically relevant information from text or complex sensor streams.

As part of this, the ITU Focus Group on AI for Health will also produce an assessment framework to standardize the evaluation and validation of AI algorithms — including the identification of structured and normalized data to train AI algorithms. It will develop open benchmarks with the aim of these becoming international standards.

The ITU Focus Group on AI for Health will report to the ITU standardization expert group for multimedia, Study Group 16.

I got curious about Study Group 16 (from the Study Group 16 at a glance webpage),

Study Group 16 leads ITU’s standardization work on multimedia coding, systems and applications, including the coordination of related studies across the various ITU-T SGs. It is also the lead study group on ubiquitous and Internet of Things (IoT) applications; telecommunication/ICT accessibility for persons with disabilities; intelligent transport system (ITS) communications; e-health; and Internet Protocol television (IPTV).

Multimedia is at the core of the most recent advances in information and communication technologies (ICTs) – especially when we consider that most innovation today is agnostic of the transport and network layers, focusing rather on the higher OSI model layers.

SG16 is active in all aspects of multimedia standardization, including terminals, architecture, protocols, security, mobility, interworking and quality of service (QoS). It focuses its studies on telepresence and conferencing systems; IPTV; digital signage; speech, audio and visual coding; network signal processing; PSTN modems and interfaces; facsimile terminals; and ICT accessibility.

I wonder which group deals with artificial intelligence and, possibly, robots.

Chemical testing without animals

Thomas Hartung, professor of environmental health and engineering at Johns Hopkins University (US), describes the situation where chemical testing is concerned in his July 25, 2018 essay for The Conversation (republished on phys.org),

Most consumers would be dismayed with how little we know about the majority of chemicals. Only 3 percent of industrial chemicals – mostly drugs and pesticides – are comprehensively tested. Most of the 80,000 to 140,000 chemicals in consumer products have not been tested at all or just examined superficially to see what harm they may do locally, at the site of contact and at extremely high doses.

I am a physician and former head of the European Center for the Validation of Alternative Methods of the European Commission (2002-2008), and I am dedicated to finding faster, cheaper and more accurate methods of testing the safety of chemicals. To that end, I now lead a new program at Johns Hopkins University to revamp the safety sciences.

As part of this effort, we have now developed a computer method of testing chemicals that could save more than US$1 billion annually and more than 2 million animals. Especially in times when the government is rolling back regulations on the chemical industry, new methods to identify dangerous substances are critical for human and environmental health.

Having written on the topic of alternatives to animal testing on a number of occasions (my December 26, 2014 posting provides an overview of sorts), I was particularly interested to see this in Hartung’s July 25, 2018 essay on The Conversation (Note: Links have been removed),

Following the vision of Toxicology for the 21st Century, a movement led by U.S. agencies to revamp safety testing, important work was carried out by my Ph.D. student Tom Luechtefeld at the Johns Hopkins Center for Alternatives to Animal Testing. Teaming up with Underwriters Laboratories, we have now leveraged an expanded database and machine learning to predict toxic properties. As we report in the journal Toxicological Sciences, we developed a novel algorithm and database for analyzing chemicals and determining their toxicity – what we call read-across structure activity relationship, RASAR.

This graphic reveals a small part of the chemical universe. Each dot represents a different chemical. Chemicals that are close together have similar structures and often properties. Thomas Hartung, CC BY-SA

To do this, we first created an enormous database with 10 million chemical structures by adding more public databases filled with chemical data, which, if you crunch the numbers, represent 50 trillion pairs of chemicals. A supercomputer then created a map of the chemical universe, in which chemicals are positioned close together if they share many structures in common and far apart where they don’t. Most of the time, any molecule close to a toxic molecule is also dangerous, and even more so if many toxic substances are close by while harmless substances are far away. Any substance can now be analyzed by placing it into this map.

If this sounds simple, it’s not. It requires half a billion mathematical calculations per chemical to see where it fits. The chemical neighborhood focuses on 74 characteristics which are used to predict the properties of a substance. Using the properties of the neighboring chemicals, we can predict whether an untested chemical is hazardous. For example, for predicting whether a chemical will cause eye irritation, our computer program not only uses information from similar chemicals, which were tested on rabbit eyes, but also information for skin irritation. This is because what typically irritates the skin also harms the eye.

How well does the computer identify toxic chemicals?

This method will be used for new untested substances. However, if you do this for chemicals for which you actually have data, and compare prediction with reality, you can test how well this prediction works. We did this for 48,000 chemicals that were well characterized for at least one aspect of toxicity, and we found the toxic substances in 89 percent of cases.

This is clearly more accurate than the corresponding animal tests, which only yield the correct answer 70 percent of the time. The RASAR shall now be formally validated by an interagency committee of 16 U.S. agencies, including the EPA [Environmental Protection Agency] and FDA [Food and Drug Administration], that will challenge our computer program with chemicals for which the outcome is unknown. This is a prerequisite for acceptance and use in many countries and industries.

The potential is enormous: The RASAR approach is in essence based on chemical data that was registered for the 2010 and 2013 REACH [Registration, Evaluation, Authorisation and Restriction of Chemicals] deadlines [in Europe]. If our estimates are correct and chemical producers had used our RASAR program instead of registering chemicals after 2013, we would have saved 2.8 million animals and $490 million in testing costs – and received more reliable data. We have to admit that this is a very theoretical calculation, but it shows how valuable this approach could be for other regulatory programs and safety assessments.

In the future, a chemist could check RASAR before even synthesizing their next chemical to check whether the new structure will have problems. Or a product developer can pick alternatives to toxic substances to use in their products. This is a powerful technology, which is only starting to show all its potential.

It’s been my experience that claims of having led a movement (Toxicology for the 21st Century) are often contested, with many others competing for the title of ‘leader’ or ‘first’. That said, this RASAR approach seems very exciting, especially in light of the skepticism about limiting and/or making animal testing unnecessary noted in my December 26, 2014 posting; that skepticism came from someone I thought knew better.

Here’s a link to and a citation for the paper mentioned in Hartung’s essay,

Machine learning of toxicological big data enables read-across structure activity relationships (RASAR) outperforming animal test reproducibility by Thomas Luechtefeld, Dan Marsh, Craig Rowlands, Thomas Hartung. Toxicological Sciences, kfy152, https://doi.org/10.1093/toxsci/kfy152 Published: 11 July 2018

This paper is open access.

Body-on-a-chip (10 organs)

Also known as human-on-a-chip, the 10-organ body-on-a-chip was discussed at the 9th World Congress on Alternatives to Animal Testing in the Life Sciences, held in 2014 in Prague, Czech Republic (see this July 1, 2015 posting for more). At the time, scientists were predicting success at achieving their goal of 10 organs-on-a-chip in 2017 (the best at the time was four organs). Only a few months past that deadline, scientists from the Massachusetts Institute of Technology (MIT) seem to have announced a ’10-organ chip’ in a March 14, 2018 news item on ScienceDaily,

MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body.

Such a system could reveal, for example, whether a drug that is intended to treat one organ will have adverse effects on another.

A March 14, 2018 MIT news release (also on EurekAlert), which originated the news item, expands on the theme,

“Some of these effects are really hard to predict from animal models because the situations that lead to them are idiosyncratic,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation, a professor of biological engineering and mechanical engineering, and one of the senior authors of the study. “With our chip, you can distribute a drug and then look for the effects on other tissues, and measure the exposure and how it is metabolized.”

These chips could also be used to evaluate antibody drugs and other immunotherapies, which are difficult to test thoroughly in animals because they are designed to interact with the human immune system.

David Trumper, an MIT professor of mechanical engineering, and Murat Cirit, a research scientist in the Department of Biological Engineering, are also senior authors of the paper, which appears in the journal Scientific Reports. The paper’s lead authors are former MIT postdocs Collin Edington and Wen Li Kelly Chen.

Modeling organs

When developing a new drug, researchers identify drug targets based on what they know about the biology of the disease, and then create compounds that affect those targets. Preclinical testing in animals can offer information about a drug’s safety and effectiveness before human testing begins, but those tests may not reveal potential side effects, Griffith says. Furthermore, drugs that work in animals often fail in human trials.

“Animals do not represent people in all the facets that you need to develop drugs and understand disease,” Griffith says. “That is becoming more and more apparent as we look across all kinds of drugs.”

Complications can also arise due to variability among individual patients, including their genetic background, environmental influences, lifestyles, and other drugs they may be taking. “A lot of the time you don’t see problems with a drug, particularly something that might be widely prescribed, until it goes on the market,” Griffith says.

As part of a project spearheaded by the Defense Advanced Research Projects Agency (DARPA), Griffith and her colleagues decided to pursue a technology that they call a “physiome on a chip,” which they believe could offer a way to model potential drug effects more accurately and rapidly. To achieve this, the researchers needed new equipment — a platform that would allow tissues to grow and interact with each other — as well as engineered tissue that would accurately mimic the functions of human organs.

Before this project was launched, no one had succeeded in connecting more than a few different tissue types on a platform. Furthermore, most researchers working on this kind of chip were working with closed microfluidic systems, which allow fluid to flow in and out but do not offer an easy way to manipulate what is happening inside the chip. These systems also require external pumps.

The MIT team decided to create an open system, which essentially removes the lid and makes it easier to manipulate the system and remove samples for analysis. Their system, adapted from technology they previously developed and commercialized through U.K.-based CN BioInnovations, also incorporates several on-board pumps that can control the flow of liquid between the “organs,” replicating the circulation of blood, immune cells, and proteins through the human body. The pumps also allow larger engineered tissues, for example tumors within an organ, to be evaluated.

Complex interactions

The researchers created several versions of their chip, linking up to 10 organ types: liver, lung, gut, endometrium, brain, heart, pancreas, kidney, skin, and skeletal muscle. Each “organ” consists of clusters of 1 million to 2 million cells. These tissues don’t replicate the entire organ, but they do perform many of its important functions. Significantly, most of the tissues come directly from patient samples rather than from cell lines that have been developed for lab use. These so-called “primary cells” are more difficult to work with but offer a more representative model of organ function, Griffith says.

Using this system, the researchers showed that they could deliver a drug to the gastrointestinal tissue, mimicking oral ingestion of a drug, and then observe as the drug was transported to other tissues and metabolized. They could measure where the drugs went, the effects of the drugs on different tissues, and how the drugs were broken down. In a related publication, the researchers modeled how drugs can cause unexpected stress on the liver by making the gastrointestinal tract “leaky,” allowing bacteria to enter the bloodstream and produce inflammation in the liver.

Kevin Healy, a professor of bioengineering and materials science and engineering at the University of California at Berkeley, says that this kind of system holds great potential for accurate prediction of complex adverse drug reactions.

“While microphysiological systems (MPS) featuring single organs can be of great use for both pharmaceutical testing and basic organ-level studies, the huge potential of MPS technology is revealed by connecting multiple organ chips in an integrated system for in vitro pharmacology. This study beautifully illustrates that multi-MPS “physiome-on-a-chip” approaches, which combine the genetic background of human cells with physiologically relevant tissue-to-media volumes, allow accurate prediction of drug pharmacokinetics and drug absorption, distribution, metabolism, and excretion,” says Healy, who was not involved in the research.

Griffith believes that the most immediate applications for this technology involve modeling two to four organs. Her lab is now developing a model system for Parkinson’s disease that includes brain, liver, and gastrointestinal tissue, which she plans to use to investigate the hypothesis that bacteria found in the gut can influence the development of Parkinson’s disease.

Other applications include modeling tumors that metastasize to other parts of the body, she says.

“An advantage of our platform is that we can scale it up or down and accommodate a lot of different configurations,” Griffith says. “I think the field is going to go through a transition where we start to get more information out of a three-organ or four-organ system, and it will start to become cost-competitive because the information you’re getting is so much more valuable.”

The research was funded by the U.S. Army Research Office and DARPA.

Caption: MIT engineers have developed new technology that could be used to evaluate new drugs and detect possible side effects before the drugs are tested in humans. Using a microfluidic platform that connects engineered tissues from up to 10 organs, the researchers can accurately replicate human organ interactions for weeks at a time, allowing them to measure the effects of drugs on different parts of the body. Credit: Felice Frankel

Here’s a link to and a citation for the paper,

Interconnected Microphysiological Systems for Quantitative Biology and Pharmacology Studies by Collin D. Edington, Wen Li Kelly Chen, Emily Geishecker, Timothy Kassis, Luis R. Soenksen, Brij M. Bhushan, Duncan Freake, Jared Kirschner, Christian Maass, Nikolaos Tsamandouras, Jorge Valdez, Christi D. Cook, Tom Parent, Stephen Snyder, Jiajie Yu, Emily Suter, Michael Shockley, Jason Velazquez, Jeremy J. Velazquez, Linda Stockdale, Julia P. Papps, Iris Lee, Nicholas Vann, Mario Gamboa, Matthew E. LaBarge, Zhe Zhong, Xin Wang, Laurie A. Boyer, Douglas A. Lauffenburger, Rebecca L. Carrier, Catherine Communal, Steven R. Tannenbaum, Cynthia L. Stokes, David J. Hughes, Gaurav Rohatgi, David L. Trumper, Murat Cirit, Linda G. Griffith. Scientific Reports, 2018; 8 (1) DOI: 10.1038/s41598-018-22749-0 Published online: 14 March 2018

This paper, which describes testing for four-, seven-, and ten-organs-on-a-chip, is open access. From the paper’s Discussion,

In summary, we have demonstrated a generalizable approach to linking MPSs [microphysiological systems] within a fluidic platform to create a physiome-on-a-chip approach capable of generating complex molecular distribution profiles for advanced drug discovery applications. This adaptable, reusable system has unique and complementary advantages to existing microfluidic and PDMS-based approaches, especially for applications involving high logD substances (drugs and hormones), those requiring precise and flexible control over inter-MPS flow partitioning and drug distribution, and those requiring long-term (weeks) culture with reliable fluidic and sampling operation. We anticipate this platform can be applied to a wide range of problems in disease modeling and pre-clinical drug development, especially for tractable lower-order (2–4) interactions.

Congratulations to the researchers!

More from PETA (People for the Ethical Treatment of Animals) about nanomaterials and lungs

Science progresses by increments. First, there was my April 27, 2016 posting featuring some recent work on nanomaterials and lungs by the organization People for the Ethical Treatment of Animals (PETA). Now, approximately one month later, PETA has announced a new paper on the topic, according to a May 26, 2016 news item on phys.org,

A scientist from the PETA International Science Consortium Ltd. is the lead author of a review on pulmonary fibrosis that results from inhaling nanomaterials, which has been published in Archives of Toxicology. The coauthors are scientists from Health Canada, West Virginia University, and the University of Fribourg in Switzerland.

A May 26, 2016 PETA news release on EurekAlert, which originated the news item, provides more detail (Note: Links have been removed),

The increasing use of nanomaterials in consumer goods such as paint, building materials, and food products has increased the likelihood of human exposure. Inhalation is one of the most prominent routes by which exposure can occur, and because inhalation of nanomaterials may be linked to lung problems such as pulmonary fibrosis, testing is conducted to assess the safety of these materials.

The review is one part of the proceedings of a 2015 workshop [mentioned in my Sept. 3, 2015 posting] organized by the PETA International Science Consortium, at which scientists discussed recommendations for designing an in vitro approach to assessing the toxicity of nanomaterials in the human lung. The workshop also produced another report that was recently published in Archives of Toxicology (Clippinger et al. 2016) and a review published in Particle and Fibre Toxicology (Polk et al. 2016) [mentioned in my April 27, 2016 posting] on exposing nanomaterials to cells grown in vitro.

The expert recommendations proposed at the workshop are currently being used to develop an in vitro system to predict the development of lung fibrosis in humans, which is being funded by the Science Consortium.

“International experts who took part in last year’s workshop have advanced the understanding and application of non-animal methods of studying nanomaterial effects in the lung,” says Dr. Monita Sharma, nanotoxicology specialist at the Consortium and lead author of the review in Archives of Toxicology. “Good science is leading the way toward more humane testing of nanomaterials, which, in turn, will lead to better protection of human health.”

Here’s a link to and a citation for the paper,

Predicting pulmonary fibrosis in humans after exposure to multi-walled carbon nanotubes (MWCNTs) by Monita Sharma, Jake Nikota, Sabina Halappanavar, Vincent Castranova, Barbara Rothen-Rutishauser, Amy J. Clippinger. Archives of Toxicology pp 1-18 DOI: 10.1007/s00204-016-1742-7 First online: 23 May 2016

This paper is behind a paywall.

Call for proposals to create in vitro inhalation tests for nanomaterial toxicity

I got an email announcement (March 17, 2015), which has acted as a spur to my desire to follow up on my December 26, 2014 post, Deux Seurats: one (was an artist) and one (is an inquiry into scientifically sound alternatives to animal testing).

First, here’s a March 16, 2015 PETA (People for the Ethical Treatment of Animals) International Science Consortium (PISC) press release which describes a practical and scientific initiative for finding alternatives to animal testing,

Today, the PETA International Science Consortium Ltd. put out a request for proposals (RFP) to identify facilities that can develop an in vitro test that, when used in an integrated approach, has the potential to replace the current test conducted on animals to assess the inhalation toxicity of nanomaterials.

The RFP follows a workshop, organized by the Science Consortium and held at U.S. Environmental Protection Agency headquarters in Washington, D.C., that brought together scientific experts from government, industry, academia, and nonprofit organizations from around the world. The goal of the workshop was to make specific recommendations on the design of this in vitro test, including cell types, endpoints, exposure systems, and dosimetry considerations required to develop the in vitro model.

Based on the recommendations from the workshop, the RFP seeks facilities to develop a method that can assess the induction of pulmonary fibrosis in cells co-cultured at the air-liquid interface following exposure to aerosolized multi-walled carbon nanotubes. The Science Consortium will fund this work.

“For both scientific and ethical reasons, there is interest in developing a non-animal method that is faster, cheaper, and more relevant to the human situation,” says the Science Consortium’s Dr. Amy Clippinger.

The long-term vision is to include this in vitro test in a battery of in silico and in vitro assays that can be used in an integrated testing strategy, providing comprehensive information on biological endpoints relevant to inhalation exposure to nanomaterials to be used in the hazard ranking of substances in the risk-assessment process.

The request for proposals can be found here. The proposal deadline is May 29, 2015.

For more information, please visit PISCLTD.org.uk.

I see the research focus is on multi-walled carbon nanotubes. This makes sense, since research has shown that long fibres, when found in the lung, act like the asbestos fibres they resemble.

Second, I’m hoping to follow up my Deux Seurats piece soon with the tentatively titled, The trouble with mice and … .

Deux Seurats: one (was an artist) and one (is an inquiry into scientifically sound alternatives to animal testing)

It must have been a moment of artistic madness which led to naming one of the European Union’s biggest projects dedicated to finding alternatives to animal testing, SEURAT-1. (Note: [1] All references used for this post are listed at the end. [2] There is a full disclosure statement after the references.)

Georges Seurat, a French post-impressionist painter, left no record that he was ever concerned with animal testing although he could be considered the ‘patron saint of pixels’ due to paintings which consist of dots rather than strokes.

Le Cirque (1891) by Georges Seurat in the Musée d’Orsay [Public domain or Public domain], via Wikimedia Commons; The Yorck Project: 10.000 Meisterwerke der Malerei. DVD-ROM, 2002. ISBN 3936122202. Distributed by DIRECTMEDIA Publishing GmbH; downloaded from https://commons.wikimedia.org/wiki/File:Georges_Seurat_019.jpg

Still, the idea of painstakingly constructing a picture dot by dot seems curiously similar to the scientific process where years of incremental gains in knowledge and understanding lead to new perspectives on the world around us. In this case, the change of perspective concerns the use of animals in testing for toxicological effects of medications, cosmetics, and other chemical goods intended for humans.

Animal testing dates back to the third and fourth centuries BCE (before the common era), although the father of vivisection, Galen, a Greek physician, doesn’t make an appearance until the 2nd century CE in Rome. More recently, we have an Arab physician, Avenzoar (Ibn Zuhr), in 12th-century Moorish Spain to thank for introducing animal experimentation as a means of testing surgical procedures.

The millennia-old practice of animal testing, surgical or otherwise, has presented a cruel conundrum. The tests have been our best attempt to save human lives and reduce human misery, albeit at the cost of the animals used in the tests.

Social discomfort over animal testing is rising internationally and, thankfully, it looks like animal testing is in decline as alternatives and improvements (animal physiology is not perfectly equivalent to human physiology) are adopted. Alternatives and improvements have made possible actions such as the

  • European Union’s (EU) March 2013 ban on the sale of animal-tested cosmetics from anywhere in the world; there was an earlier 2009 ban on the sale of animal-tested cosmetics from anywhere in the EU,
  • China’s July 2014 announcement that animal-testing for cosmetics produced domestically is no longer required,
  • Israel’s 2013 ban on importing and marketing of cosmetics tested on animals,
  • India’s bans on cruel animal testing in its laboratories (2013) and on importing animal-tested cosmetics (Oct. 2014)

There are also a number of outstanding (as of December 2014) legislative proposals regarding animal-testing and cosmetics in countries such as Australia, Brazil, Taiwan, New Zealand, and the US.

However, cosmetics are only one product type among many, many chemical products; medications, for example, still rely on animal testing for safety certification. Despite recent victories, the process of dismantling the animal-testing systems in place is massive, complex, and difficult, even with support and encouragement from various government agencies, civil society groups, scientists, and various international organizations.

Well-entrenched national and international regulatory frameworks make animal testing mandatory prior to releasing a product into the marketplace. Careful thought, assurances to policy makers and the general public, and confidence that replacement regimes will be equivalent to the old system of animal testing are necessary.

Strangely, assuring even sophisticated thinkers can prove surprisingly difficult. David Ropeik, a former Director of Communications for Harvard University’s Center for Risk Analysis and currently an international consultant and speaker on risk analysis, wrote in a Sept. 2014 post for The Big Think about the EU’s 2013 ban on cosmetics testing on animals,

But people use lotions and toothpastes and deodorants and perfumes repeatedly. We expose ourselves every day to hundreds of human-made chemicals, and some of those substances, which also fall under the European ban on animal testing for cosmetics, have the potential to do deeper damage, like cancer, or reproductive damage to the developing fetus. And there are no reliable replacement tests for those serious outcomes.

This now-banned animal testing for the systemic risks from repeated exposure to these everyday products was also a source of important information on the health effects of industrial chemicals generally. Results from cosmetic testing become part of the library of what we know about how industrial chemicals might harm us, no matter what products they’re in.

So the European community has eliminated a way for science to study the risk of industrial chemicals…because it feels right to consider the rights of animals. [emphasis mine] We have done what feels right, but in the process, without realizing it, we have made it harder to figure out how to keep ourselves safe.

Ropeik doesn’t substantiate his comment about the EU community acting from ‘feelings’, discuss how current alternatives are inferior to animal testing, or offer data about how this ban has made the earth a more dangerous place for humans. Meanwhile, more jurisdictions are limiting or eliminating testing of cosmetics on animals, while an international competition, which has already developed new techniques, is underway to find yet more alternatives. SEURAT-1, the main European Union project, is designed to carry out a set of scientific inquiries to facilitate the transition to animal testing alternatives where possible. It is organized around seven interlinked projects (or, borrowing from Georges, seven dots):

  • SCR&Tox (Stem Cells for Relevant efficient extended and normalized TOXicology): Stem cell differentiation for providing human-based organ-specific target cells to assay toxicity pathways in vitro
  • Hepatic Microfluidic Bioreactor (HeMiBio): Developing a hepatic microfluidic bioreactor to mimic the complex structure and function of the human liver (liver-on-a-chip)
  • Detection of endpoints and biomarkers for repeated dose toxicity using in vitro systems (DETECTIVE): Identifying and investigating human biomarkers in cellular models for repeated dose in vitro testing
  • Integrated In Silico Models for the Prediction of Human Repeated Dose Toxicity of COSMetics to Optimise Safety (COSMOS): Integrating and delivering a suite of computational tools to predict the effects of long-term exposure to chemicals in humans based on in silico calculations
  • Predicting long term toxic effects using computer models based on systems characterization of organotypic cultures (NOTOX): Developing systems biological tools for organotypic human cell cultures suitable for long term toxicity testing and the identification and analysis of pathways of toxicological relevance
  • Supporting Integrated Data Analysis and Servicing of Alternative Testing Methods in Toxicology (ToxBank): Data management, cell and tissue banking, selection of “reference compounds” and chemical repository
  • Coordination of projects on new approaches to replace current repeated dose systemic toxicity testing of cosmetics and chemicals (COACH): Cluster-level coordination and support action (or, as it might be called, administration)

As SEURAT-1 nears its sunset date in 2015 (it is a five-year, 50M euro project started in 2011), there are successes to celebrate. For example, Emma Davies, in her article titled Alternative test data publicly available; ToxBank data warehouse (Sept. 4, 2014 for Chemical Watch), notes that ToxBank includes data from SEURAT-1’s “gold” standard reference compounds, which have documented liver, kidney, and cardio toxicity. As well, data sets from a comprehensive 2012 liver toxicity study supplied by the European Commission’s Joint Research Centre (the EU’s research hub and laboratory) have been added. ToxBank has also negotiated with Open TG-GATEs, a Japanese toxicogenomics data resource, and with ToxCast and Tox21, two US high-throughput screening programmes, to add their data to the ToxBank data warehouse. Meanwhile, the warehouse’s data is publicly available on request.

COSMOS, the other data-oriented member of the SEURAT-1 cluster, should provide a good starting point for in silico studies (computer simulations), as it now boasts information on some 19,000 cosmetics-related substances, including toxicity data for more than 12,000 studies, according to Davies’ article, Critical toxicity pathways at heart of Seurat-1 follow on (Sept. 11, 2014 for Chemical Watch).

While we can take Ropeik’s point that animal testing has been an important element in ensuring drug and chemical safety, the move to limit or ban animal testing for cosmetics has been over 50 years in the making, and this current wave of regulatory changes has been approached cautiously. There may be some unforeseen consequences, both good and bad, to these bans on animal testing, but to remain mired in the procedures and processes of the past is to deny an improved future for humans and the animals we have used for testing.

References

Pointillism

http://en.wikipedia.org/wiki/Pointillism

History of animal testing

http://en.wikipedia.org/wiki/Alternatives_to_animal_testing

2013 EU ban on animal testing for cosmetics

http://www.bbc.com/news/world-europe-21740745

http://ec.europa.eu/consumers/archive/sectors/cosmetics/animal-testing/index_en.htm

More legislation on cosmetics testing

http://en.wikipedia.org/wiki/Testing_cosmetics_on_animals

India ban

http://www.hsi.org/news/press_releases/2014/10/animal-tested-cosmetics-import-ban-india-101414.html

China ban

http://www.care2.com/causes/its-official-china-ends-mandatory-animal-testing-for-cosmetics.html

EU 2013 one year later

http://www.huffingtonpost.com/monica-engebretson/celebrating-the-first-ann_b_4994028.html

David Ropeik’s credentials and resistance to eliminating animal-testing

http://en.wikipedia.org/wiki/David_Ropeik

http://utility.prod.bigthink.com/risk-reason-and-reality/the-ban-on-animal-testing-morally-right-emotionally-appealing-but-dangerous

SEURAT-1

http://www.seurat-1.eu/

Cluster projects

http://www.seurat-1.eu/pages/cluster-projects.php

Emma Davies, Sept. 4, 2014 article (not behind a paywall)

http://chemicalwatch.com/21061/alternative-test-data-publicly-available

Emma Davies, Sept. 11, 2014 article (behind a paywall)

http://chemicalwatch.com/register?o=21147&productID=1

Reference to cosmetics ban being over 50 years in the making

https://www.nc3rs.org.uk/the-3rs

The principles of the 3Rs (Replacement, Reduction and Refinement) were developed over 50 years ago as a framework for humane animal research.

Johns Hopkins Centre for Alternatives to Animal Testing (CAAT)

Resource list: http://caat.jhsph.edu/resources/

Full disclosure: (1) SEURAT-1 paid for my flight, lodging, and attendance at WC9, the 9th World Congress on Alternatives and Animal Use in the Life Sciences. (2) I have written about alternatives to animal testing prior to any knowledge of SEURAT-1.

FrogHeart goes to the 9th World Congress on alternatives to animal testing

Also known as ‘Humane Science in the 21st Century’, the 9th World Congress on ‘Alternatives to Animal Testing in the Life Sciences’ is being held next week (Aug. 24 – 28, 2014), and FrogHeart will be reporting on various aspects of the work. These posts are sponsored. I realize some folks don’t approve of the practice, which seems odd given that all writing, ultimately, is paid for and sponsored in one fashion or another. While direct sponsorship of a piece of writing can make objectivity (such as it is) more of a challenge, it is not beyond the realm of possibility. Conversely, salaried writers can also become compromised due to friendships and loyalties built up over the years or, possibly, due to graft.

All of the posts generated as a consequence of the sponsorship will be identified with the sponsoring agency (SEURAT-1).

For anyone who wishes to analyze and compare the posts for bias, here are a few pieces written prior to any contact about the congress:

  • Reducing animal testing for nanotoxicity—PETA (People for the Ethical Treatment of Animals) presentation at NanoTox 2014 (April 24, 2014)
  • Nanomaterials, toxicology, and alternatives to animal testing (Aug. 22, 2013)
  • Animal love and nanotechnology (Jan. 12, 2012)
  • Global TV (national edition) and nanotechnology; EPA develops a ‘kinder to animals’ nanomaterials research strategy (Oct. 8, 2009; scroll down 25% of the way)

Should you detect undue bias in any of the sponsored pieces, please do let me know.