1,4-dioxane: The case of the disappearing tumors

Right now, EPA is in the process of conducting “risk evaluations” for existing chemicals in commerce, as mandated by the recently passed Lautenberg Chemical Safety for the 21st Century Act (which amended the original, ineffective Toxic Substances Control Act).

[For a refresher on how this effort fits into the bigger picture of chemical assessments, you can review my infographic.]

So far, the agency has released draft risk evaluations for four chemicals: PV-29, HBCD, 1,4-dioxane, and 1-bromopropane. I’ve been working with my former colleagues at EDF Health to carefully review the drafts for the latter two chemicals.  Unfortunately, as expected, these drafts put out by the Trump EPA have a number of problems, which we’ve detailed in public comments.

For a window into one particularly concerning issue, you can check out a post that I wrote with Dr. Richard Denison on the EDF Health blog, 1,4-dioxane: The case of the disappearing tumors.

Concerning Glyphosate

Apologies for the long blog absence. I’ve been busy PhD-ing (including preparing for and passing my oral general exam!) and working on various side projects.

One of those side projects has been focused on glyphosate. Glyphosate, the active ingredient in Monsanto’s (now owned by Bayer) Roundup, is the most widely used herbicide in the world. First marketed in 1974, the chemical saw its usage skyrocket after the introduction of “Roundup-ready” (i.e., Roundup-resistant) crops in 1996 and the adoption of “green burndown” (i.e., using the chemical as a desiccant shortly before harvest) in the mid-2000s. In 2014, global usage was estimated at 1.8 billion pounds.

But these staggering statistics are not the only claim to fame for glyphosate. It has also been the subject of intense international regulatory and scientific scrutiny in recent years, for its possible link to cancer. The stakes are high (billions of dollars for Monsanto, related to sales of both the herbicide itself and its line of herbicide-resistant crops), and the conclusions are controversial.

Carcinogenic or not, that is the question.

In 2015, the International Agency for Research on Cancer (IARC) declared that glyphosate was a “probable human carcinogen” (relevant links: explanation of IARC classifications; official summary for glyphosate; IARC webpage with follow-up links). However, that same year, the European Food Safety Authority (EFSA) concluded that “glyphosate is unlikely to pose a carcinogenic hazard to humans, and the evidence does not support classification with regard to its carcinogenic potential.” In 2016, the US Environmental Protection Agency (EPA) determined that glyphosate was “not likely to be carcinogenic to humans at doses relevant for human health risk assessment.”

Ok, so that’s confusing. How did these agencies, all of which are supposed to conduct unbiased reviews of all of the evidence, come to such different conclusions? There have been several recent publications that explain these inconsistencies (for example, see here and here). In essence, it boils down to: 1) differences in how the agencies weighed peer-reviewed, publicly available studies (most show adverse health effects) versus unpublished regulatory studies submitted by manufacturers (most do not show adverse health effects); 2) whether the agencies focused on studies of pure glyphosate or the final formulated glyphosate-based product that is used in agricultural applications (which is known to be more toxic); and 3) whether the agencies considered dietary exposures to the general population only or also took into account elevated exposures in occupational scenarios (i.e., individuals who apply glyphosate-based herbicides in agricultural settings).

Meanwhile, as the debate continues… 27 countries (as of November 2018) have decided to move forward with their own bans or restrictions. And Monsanto/Bayer faces more than 9,000 lawsuits in the US from individuals who link their cancer to the herbicide. (The first case was decided in favor of the plaintiff, though Monsanto is appealing the decision.)

My connection

This highly contentious area is outside the topic of my dissertation research, but I got involved because my advisor was a member of the EPA scientific advisory panel that reviewed the agency’s draft assessment of glyphosate in 2016. The panel’s final report raised a number of concerns with EPA’s process and conclusions, including that the agency did not follow its own cancer guidelines and made some inappropriate statistical decisions in the analysis.

Because of their dissatisfaction with EPA’s report, my advisor and two other panel members decided to pursue related research to dig further into the issues. I enthusiastically accepted the invitation to join.   

Our collaborative group recently published two review papers on glyphosate. I’ll provide brief highlights of both below.

Reviewing our reviews, part 1: exposure to glyphosate  

In January 2019, we published a review of the evidence of worldwide exposure to glyphosate. Even though glyphosate-based products are the most heavily used herbicides in the world, we were surprised (and dismayed) to find fewer than twenty published studies, documenting exposure in only 3,721 individuals.

So, our paper mostly serves to highlight the limitations of the existing data:

  • These studies sampled small numbers of individuals from certain geographic regions, mostly in the US and Europe, and therefore are not representative of the full scope of global exposures
  • Most studies relied on a single urine spot sample, which does not represent exposure over the long term and/or in different agricultural seasons
  • The occupational studies only covered 403 workers in total, a serious deficiency given its widespread agricultural use. Few assessed exposure before and after spraying; and no studies evaluated patterns related to seasonality, crop use, etc.
  • Only two small studies evaluated how population exposure has changed over time. So, we definitely don’t know enough about whether the dramatic increases in global usage have resulted in similarly dramatic increased concentrations in our bodies. (Presumably, yes).  

In addition to highlighting the need to address the points above, we specifically recommended  incorporating glyphosate into the National Health and Nutrition Examination Survey (NHANES), a national survey that monitors exposure to many chemicals – including other common pesticides. This is an obvious and fairly straightforward suggestion; in reality, it’s quite bizarre that it has not already been incorporated into NHANES. Testing for glyphosate would allow us to better understand exposure across the US – which is not reflective of global levels, of course, but an important start.

Reviewing our reviews, part 2: glyphosate & non-Hodgkin Lymphoma (NHL)  

Our second paper, published earlier this week, was a meta-analysis of the link between glyphosate exposure and non-Hodgkin Lymphoma (NHL). Yes, diving right in to the controversy.

There had already been several prior meta-analyses that showed an association between glyphosate and NHL, but ours incorporates new research and applies a method that would be more sensitive to detecting an association.

A meta-analysis combines results from separate studies to better understand the overall association. While they technically do not generate any “new” data, meta-analyses are essential in the field of public health. A single study may have certain weaknesses, focus only on selected populations, or reflect a chance finding. In drawing conclusions about hazards (especially in this scenario, affecting millions of people and billions of dollars), we want to look across the collection of data from many studies so we can be confident in our assessment.
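For readers curious about the mechanics, here is a minimal sketch of how meta-analytic pooling works in general – a simple inverse-variance (fixed-effect) combination of made-up relative risks, not the actual studies, weights, or model from our paper:

```python
import math

# Hypothetical relative risks (RR) and 95% confidence intervals from three
# individual studies -- illustrative numbers only, not our actual data.
studies = [
    {"rr": 1.3, "ci_low": 0.9, "ci_high": 1.9},
    {"rr": 1.6, "ci_low": 1.1, "ci_high": 2.3},
    {"rr": 1.2, "ci_low": 0.8, "ci_high": 1.8},
]

# Work on the log scale and weight each study by the inverse of its variance,
# approximated from the width of its confidence interval.
weights, log_rrs = [], []
for s in studies:
    se = (math.log(s["ci_high"]) - math.log(s["ci_low"])) / (2 * 1.96)
    weights.append(1 / se ** 2)
    log_rrs.append(math.log(s["rr"]))

pooled_log_rr = sum(w * x for w, x in zip(weights, log_rrs)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled RR: {math.exp(pooled_log_rr):.2f}")
print(f"95% CI: {math.exp(pooled_log_rr - 1.96 * pooled_se):.2f} "
      f"to {math.exp(pooled_log_rr + 1.96 * pooled_se):.2f}")
```

In practice, analysts often use random-effects models (which allow the true effect to vary across studies), but the core idea is the same: weight each study by its precision and combine.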

We were able to include a newly published follow-up study of over 54,000 licensed pesticide applicators (part of the Agricultural Health Study (AHS)). Compared to an earlier paper of the same cohort, this updated AHS study reports on data for an additional 11-12 years. This extension is important to consider, given that cancer develops over a long period of time, and shorter studies may not have followed individuals long enough for the disease to arise.

We conducted this meta-analysis with a specific and somewhat unusual approach. We decided to focus on the highly exposed groups in order to most directly address the question of carcinogenicity. In other words, we would expect the dangers (or, proof of safety: is it safe enough to drink?) to be most obvious in those who are highly exposed. Combining people who have low exposure with those who have high exposure would dilute the association (see the toy calculation below). IMPORTANT NOTE: this approach of picking out the high exposure groups is only appropriate because we are simply looking for the presence or absence of a link. If you were interested in the specific dose-response relationship (i.e., how a certain level of exposure relates to a certain level of hazard), this approach would not be appropriate.
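Here is the toy calculation mentioned above, showing how lumping lightly exposed people in with highly exposed people pulls the estimated relative risk toward the null. All of the numbers are invented for illustration:

```python
# Toy example: why combining low- and high-exposure groups dilutes an association.
# All numbers are invented for illustration.
baseline_risk = 0.02          # assumed background risk of disease
rr_high = 1.4                 # assumed relative risk in the highly exposed group
rr_low = 1.0                  # assume no excess risk in the lightly exposed group

n_high, n_low = 1_000, 9_000  # most "exposed" people are only lightly exposed

cases = n_high * baseline_risk * rr_high + n_low * baseline_risk * rr_low
risk_all_exposed = cases / (n_high + n_low)

print(f"RR in the high-exposure group alone: {rr_high:.2f}")
print(f"RR when low- and high-exposure groups are lumped together: "
      f"{risk_all_exposed / baseline_risk:.2f}")  # ~1.04, much closer to the null
```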

Our results indicate that individuals who are highly exposed to glyphosate have an increased risk of NHL, compared to the control/comparison groups. This finding itself is not entirely earth-shattering: the results from prior meta-analyses were similar. But, it adds more support to the carcinogenic classification.

More specifically, we report a 41% increased risk. For comparison, the average lifetime risk of NHL is about 2%, so a 41% relative increase would correspond to an absolute lifetime risk of roughly 2.8%. However, I want to emphasize that because our analytical method prioritized the high exposure groups, the precise numerical estimate is less important than the significant positive association. Basically, the purpose of this and other related assessments (like IARC’s) is to understand whether glyphosate is carcinogenic or not: this is a yes/no question. It is up to regulatory agencies to judge the scale of this effect and decide how to act on this information.

As with any scientific project, there are several limitations. In particular, we combined estimates from studies that differed in important ways, including their design (cohort vs. case-control), how they controlled for confounding by exposure to other pesticides, and which reference group they chose for the comparison (unexposed vs. lowest exposed). When studies are very different, we need to be cautious about combining them. This is another reason to focus more on the direction of the effect rather than the exact numerical estimate.  

Beyond the headlines

The news coverage of this work has focused on the overarching results (especially the 41% statistic), as expected. But I want to highlight a few other aspects that have been overlooked.

To better understand the timing of these studies in relation to glyphosate usage, we put together a timeline of market milestones and epidemiological study events.

 

[Figure: timeline of glyphosate market milestones and epidemiological study events.]
This took me SO MANY HOURS.

Of note is that all of the studies conducted to date evaluated cancers that developed prior to 2012-2013, at the latest. Most were much earlier (80s, 90s, early 00s). As illustrated in the timeline, we’ve seen a huge increase in glyphosate usage since green burndown started in the mid-2000s. Yet none of these studies would have captured the effects of these exposures, which means the correlation should be easier to see in newer studies if/when they are conducted.

Also, as I mentioned above, we included the newly published AHS cohort study in our meta-analysis. One might expect the old and new AHS studies to be directly comparable, given that they were conducted by the same research group. However, our deep dive into both papers elucidated important differences; consequently, they are not directly comparable (see Table 8 of our paper). An in-depth discussion of these issues (and some of their potential implications) is a topic for a separate post, but there’s a clear lesson here about how important it is to carefully understand study design and exposure assessment methods when interpreting results.

Finally, two brief points on the animal toxicology studies, which we also reviewed in our paper because they provide complementary evidence for assessing hazard in humans. We discuss these data but did not conduct a formal pooled analysis (to combine results from separate but similarly designed animal studies), which would allow us to better understand overarching results from the animal studies. Anyone ready for a project?  

Additionally, in future animal toxicology studies, researchers should use the formulated glyphosate product that is actually used around the world rather than the pure glyphosate chemical that has been the focus of prior testing. There is growing evidence to suggest that the final formulated product is more toxic, perhaps due to the added adjuvants and surfactants. And this would allow for better comparisons to the human epidemiological studies, which assess effects of exposure to the formulated product.

Reflecting on the process

I had followed the evolving story on glyphosate with great interest for several years, so it was exciting to be part of these projects. Contributing to research with a real-world public health impact has always been a priority for me, and this high-profile research (affecting millions of people, billions of dollars) certainly fits the bill.

That being said, it was not an easy process. These two papers represent years of work by our group, which we did on top of our regular commitments. Collaborating with three researchers whom I had never met also proved challenging, since we did not have established rapport or an understanding of each other’s work and communication styles. So, in addition to gaining skills in conducting literature reviews and meta-analyses, I learned valuable lessons in group dynamics. 🙂

Given the high-stakes and high-profile nature of this work, we were extra meticulous about the details of this project. We knew that it would be scrutinized carefully, and any error could damage our credibility (especially worrisome for me, since I’m just establishing myself in my career). It took many, many rounds of review and editing to get everything right. A good lesson in patience.

Speaking of patience, I know that scientific research and related policy decisions take time. But I hope that these two projects can contribute to moving forward in a direction that protects public health.

 

Breastfeeding in the Age of Chemicals

It’s a catch-22 that would drive any new mother crazy.

Should she breastfeed, which is linked to many lasting health benefits for the newborn child, but take the risk of delivering toxic chemicals, such as dioxins and DDT, that are stored in her breast milk?

Or, should she use infant formula, which avoids the problem of breast milk contaminants but does not offer the same benefits to her newborn and may also contain toxic chemicals (because of lax food safety regulations or contaminated water used to reconstitute the formula, for example)?

Last month, two papers (from the same group of collaborators) published in Environmental Health Perspectives attempted to address these issues by reviewing decades of relevant research. These papers are both quite extensive and represent impressive work by the authors – but it’s unlikely that non-scientists will wade through the details. So, I’ll do my best to help you out.

Breast milk vs. infant formula: What chemicals are in each?

The first paper starts by documenting all of the chemicals detected in either breast milk or infant formula, based on studies published between the years 2000-2014 (mostly in the United States). Below is a highly simplified table, with just the chemicals rather than other details (refer to the paper if you’re interested in more).

Abbreviated list of chemicals detected in breast milk and infant formula in studies of women in the United States between 2000-2014. Adapted from Lehmann et al, 2018.
*No data from US studies, so information taken from international studies

 

What can we learn from these data, other than that it looks like complicated alphabet soup?

Well, toxic chemicals have been detected in both breast milk and infant formula, but there are some differences in the types of chemicals found in each. Breast milk is more likely to contain lipophilic (fat-loving/stored in fat) and long-lasting chemicals, such as dioxins and certain pesticides. Meanwhile, both breast milk and formula contain some common short-lived chemicals, such as bisphenol-A (BPA) and parabens.

While the paper also provides information about the average and range of concentrations of chemicals in each medium (and how they compare to acceptable levels of exposure for infants), it’s hard to draw general conclusions because there are such limited data available. It is complicated, expensive and invasive to get samples of breast milk across wide segments of the population, and relatively few studies have looked at chemicals found in infant formula. We need more information before we can accurately understand the patterns of exposure across the population.

Nevertheless, the presence of all of these chemicals seems concerning. No one wants to deliver toxic milk to children during their early months of life, when they are more vulnerable because their organ systems and defense mechanisms are still developing.

But, what do the data indicate about the health consequences of these exposures?

Early dietary exposures and child health outcomes

That’s where the second paper comes in. Here, the same group of authors reviewed the literature on the association between chemicals in breast milk and adverse health outcomes in children. (Note: they had planned to ask the same question for infant formula, but there were not enough published studies). They looked at many chemicals (such as dioxins, PCBs, organochlorine pesticides, PBDEs) and many outcomes (including neurological development, growth & maturation, immune system, respiratory illness, infection, thyroid hormone levels).

Early studies in the field had indeed suggested cause for concern. For example, infants in Germany fed breast milk contaminated with high levels of PCBs were found to have neurodevelopmental deficits in early life. However, levels of PCBs in the general population have declined in recent years (because of worldwide bans), and subsequent studies in the same region found that these lower levels of PCBs were not associated with harmful neurodevelopmental effects.

Overall, when looking across various chemicals and health outcomes, the current literature is actually… inconclusive. Many studies reported no associations, and studies asking similar questions often reported conflicting results. Furthermore, studies that reported significant effects often evaluated health outcomes at only one or two periods in early life, and we don’t know if those changes really persist over time.

A glass half full…of challenges

In the end, the authors were left with more questions than answers – and a long list of challenges that prevent us from understanding the effects of breast milk-related chemical exposures on children’s health. For example:

  • Chemicals in breast milk are often also present in the mother during pregnancy. How can we disentangle the effects of exposures during the prenatal period from exposures due only to breast milk in early postnatal life?
  • Many of these studies represent a classic case of “looking for your keys under the lamppost.” We can only study chemicals and outcomes that we choose to focus on, so we could be missing other important associations that exist.
  • On a related note, most studies focused on exposure to only one or a small group of chemicals, rather than the real-world scenario of the complex mixtures in breast milk.
  • There was little study replication (i.e., more than one study looking at the same question). Generally, we feel more confident drawing conclusions based on a larger pool of studies.
  • The few studies that did ask the same questions often used different experimental designs. These distinctions also pose challenges for interpretation, since differences in how researchers measure exposures and outcomes could affect their results.
  • Most studies evaluated levels of chemicals in breast milk using one or two samples only. How accurate are these exposure assessments, given that levels in the milk may change over time?
  • Measuring chemicals in breast milk is just one aspect of exposure, but it doesn’t tell us how much the infant actually received. Mothers breastfeed for different amounts of time, which affects how much is delivered to the infant. These person-to-person differences within a study could make it challenging to see clear results in an analysis.

Filling in the gaps

Perhaps the only certain conclusion from these publications is that much work remains. Not only do we need more studies that document the levels of chemicals in breast milk and infant formula (as the first paper highlighted), but we also need more data on the links between these exposures and health outcomes – including targeted research to address the challenges and key gaps noted above.

Importantly, because breastfeeding is associated with many key health benefits (such as improved neurodevelopment and reduced risk of obesity, diabetes, infections, and more), any study that looks at the impact of chemical exposures in breast milk should also ask a similar question in a comparison group of formula-fed infants. It is likely that the positive effects of breast milk far outweigh any potential negative impacts from the chemicals in the milk, and that the infants would actually be worse off if they were fed formula that had the same level of chemicals (but did not receive the benefits of breast milk).

I’ll be the first to admit: it is scary to think about all of these chemicals in breast milk. But, all decisions have trade-offs, and here, when weighing the risks and benefits, the balance still seems to favor breastfeeding in most situations.

The Minamata Convention: Can We Make Mercury History?

This post was originally published on Envirobites.org

Last month, the Lancet Commission on Pollution and Health released a striking report estimating that pollution caused 9 million deaths worldwide in 2015 – 3 times more deaths than caused by AIDS, tuberculosis, and malaria combined. Air pollution was responsible for the vast majority of these deaths, but water and chemical pollution also contributed substantial burdens.

[Figure 5 from the report: Global estimated deaths by major risk factor and cause, 2015; The Lancet Commission on Pollution and Health (2017)]

One of the chemical pollutants highlighted by the Lancet Commission is mercury, a known neurotoxicant. The report discusses the dangers of mercury when used specifically in small-scale gold mining in low-income countries, yet populations across the world can also be exposed through fish consumption or consumer products, among other sources.

Well before the new Lancet report was released, the international community had recognized the dangers of mercury and had been working to develop policies to minimize exposure to this pollutant. In fact, on August 16, 2017, after sixteen years of work and negotiations, the Minamata Convention on Mercury entered into force.

This global treaty aims to protect human health and the environment from the toxic effects of mercury through restriction of mercury products and processes. It is the first new international convention in almost 10 years focused specifically on health and the environment. (Other previous treaties include the Basel Convention for hazardous waste, the Rotterdam Convention for pesticides and industrial chemicals, and the Stockholm Convention for highly persistent global pollutants).

The convention is named after the decades-long environmental health tragedy in Minamata, Japan. Residents and animals in this area developed severe neurological syndromes after eating seafood that had been highly contaminated with mercury from industrial pollution.

[Image: https://www.flickr.com/photos/mrjoro/42550809/]

Why Mercury?

Mercury is a naturally occurring metal, and certain chemical forms (specifically, methylmercury and metallic mercury vapor) are highly toxic. According to the World Health Organization (WHO), mercury is one of the top ten chemicals of public health concern. The nervous system – and in particular, the developing brain – is highly vulnerable to mercury. Exposure can result in permanent neurological damage. (Remember the Mad Hatter from Alice in Wonderland?) Other organ systems, such as the lungs, kidneys, and immune system, may also be affected. The United Nations Environment Programme (UNEP) has stated that there is no safe level of mercury exposure.

How Are We Exposed Today?

Mercury is emitted through both natural and industrial processes. Examples of natural processes that release mercury include rock weathering, forest fires, and volcanic eruptions.

However, this global treaty targets mercury from industrial and human processes. These include coal burning, waste incineration, consumer products, and small-scale gold mining. Because mercury emissions travel through air and water without regard to political borders, only an international treaty could truly be effective in addressing this pollutant.

Human exposure to mercury occurs through several possible routes, including consumption of contaminated fish, inhalation of mercury vapors from the air, or even from the use of mercury in dental fillings.

Convention Commitments

The 84 countries that have already ratified the treaty (and the many other countries anticipated to fully join in the near future) will be required to take the following steps by 2020:

  • Phase out or reduce the use of mercury in products such as batteries, certain light bulbs, cosmetics, and pesticides
  • Control mercury air emissions from coal-fired power plants, waste incineration, and related industrial processes
  • Reduce or eliminate the use of mercury in small-scale gold mining
  • Reduce or eliminate the use of mercury in chemical manufacturing processes

The convention also provides guidance for safe storage of mercury, waste disposal, and contaminated sites.

Threats to U.S. Progress and Compliance

The U.S. Environmental Protection Agency (EPA) aims to address mercury pollution through numerous programs and regulations. But now, some of those efforts are under attack or subject to delay – threatening our prospects for reducing mercury exposure and complying with the convention.

For example, the Mercury and Air Toxics Standards (MATS) rule, passed under the Obama administration, limits the amount of mercury released from coal-fired power plants. The D.C. Circuit Court of Appeals had planned to review the cost-benefit analysis for this regulation but recently decided to delay the case instead. The Trump administration may actually decide to repeal the regulation altogether rather than defend the rule in court.

The administration’s vocal support for revitalizing the coal industry and the proposed repeal of the Clean Power Plan would further reverse progress that we have made in reducing mercury emissions. Recent shifts away from coal in this country have led to decreased mercury emissions and declining mercury contamination in tuna – historically, a significant exposure route for the population.

The current administration may also review a 2015 rule that set standards for disposal of coal ash, a byproduct of coal combustion. Improper disposal of coal ash in landfills can result in release of mercury, among other toxic chemicals.

These steps are hugely disappointing. Tackling this global pollution problem requires global action, and therefore the U.S. must continue to take strong steps to reduce mercury use and releases.

During these tumultuous times in particular, the ratification of this global treaty is an important victory for human health and the environment – and a reminder that we can still come together to make progress towards global health and sustainability. But, the realization of these goals requires political will and cooperation from all parties, and only time will tell if they can follow through on these targets.

High Impact Report on Low-Dose Toxicity

A condensed version of this post was originally published on The Conversation with the title “Can low doses of chemicals affect your health? A new report weighs the evidence.”

Toxicology’s founding father, Paracelsus (1493-1541), is famous for his paraphrased proclamation: “the dose makes the poison.” This phrase represents a pillar of traditional toxicology: Essentially, chemicals are harmful only at high enough doses.

But, increasing evidence suggests that even low levels of “endocrine-disrupting chemicals” (EDCs) can interfere with hormonal signals in the body in potentially harmful ways.

Standard toxicity tests don’t always detect the effects that chemicals can have at lower levels. There are many reasons for this shortcoming, including a focus on high dose animal testing as well as failure to include endpoints relevant to low dose disruption. And, even when the data do suggest such effects, scientists and policymakers may not act upon this information in a timely manner.

Recognizing these challenges, the U.S. Environmental Protection Agency (EPA) asked the National Academy of Sciences to convene a committee to study the issue in detail. How can we better identify whether chemicals have effects at low doses? And how can we act on this information to protect public health?

After several years of work, the committee’s report was released in July. This landmark report provides the EPA with a strategy to identify and systematically analyze data about low-dose health effects, as well as two case study examples. It is an evidence-based call to action, and scientists and policymakers should take notice.

Delving into definitions

Before discussing the report, let’s review some definitions…

We know that animal experiments usually use high doses, but in comparison, what is a “low dose?”

This issue was a matter of considerable debate, but ultimately, the committee decided to proceed with a fairly general definition of low dose as “external or internal exposure that falls within the range estimated to occur in humans.” Therefore, any dose that we would encounter in our daily lives could be included, as well as doses that would be experienced in the workplace.

The committee also clarified the meaning of “adverse effects.” When a chemical produces a visible malformation, it is easy to conclude that it is adverse. But, when a chemical causes a small change in hormone levels, it is more difficult to conclusively state that the change is adverse. Are all hormone changes adverse? If not, what is the threshold of change that should be considered adverse?

In this context, an adverse effect was defined as “a biological change in an organism that results in an impairment of functional capacity, a decrease in the capacity to compensate for stress, or an increase in susceptibility to other influences.”

A strategy to identify low dose toxicity

With these semantics settled, the committee developed a 3-part strategy to help with timely identification, analysis, and action on low-dose toxicity information:

(1) Surveillance: Active monitoring of varied data sources and solicitation of stakeholder input can provide information on low dose effects of specific chemicals, especially since EPA’s standard regulatory testing framework may not always identify such effects. Human exposure and biomonitoring data should also be collected to help define relevant exposure levels of concern across the population.

(2) Investigation & Analysis: Systematic review and related evidence integration methods can be used to conduct targeted analysis of the human, animal, and in vitro studies identified in the surveillance step. Each of these approaches has different strengths and weaknesses, so examining the evidence together offers insight that a single approach could not provide.

(3) Actions: New evidence can be incorporated into risk assessments or utilized to improve toxicity testing. For example, protocols could be updated to include newly identified outcomes relevant to endocrine disruption.

Leading by example: systematic review case studies

To put their strategy into practice, the committee conducted two systematic reviews of low dose EDC effects.

The first case study looked at phthalates, chemicals that increase the flexibility of plastic products such as shower curtains and food wrapping.

The committee found that diethylhexyl phthalate and other selected phthalates are associated with changes in male reproductive and hormonal health. Overall, the data were strong enough to classify diethylhexyl phthalate as a “presumed reproductive hazard” in humans.

The second case study focused on polybrominated diphenyl ethers (PBDEs), flame retardants used for over 30 years. Though they are now being phased out, these chemicals remain a concern for humans. They are still present in older products and can persist in the environment for many years.

Based on data showing the impact of these chemicals on learning and IQ, the panel concluded that developmental exposure is “presumed to pose a hazard to intelligence in humans.”

Questions and challenges for the future

During its review, the committee encountered a variety of barriers that could impede similar investigations into specific chemicals.

First, when reviewing evidence, it’s important to assess any systematic errors – also known as biases – that might have led to incorrect results. These errors can arise from study design flaws, such as failure to properly blind the researchers during analysis.

Some journals have strict guidelines for reporting details related to bias, but many do not. Better adherence to reporting guidelines would improve scientists’ ability to assess the quality of evidence.

Second, the committee noted a discrepancy between the concept of doses used in human and animal studies, which made it difficult to compare data from different sources.

For example, most toxicologists simply report the dose that they delivered to animals. But some of that administered dose might not actually be absorbed. The actual internal dose of chemical circulating in the body and causing harm may differ from the amount that was administered. By contrast, epidemiologists usually think about dose as the level of chemical they detect in the body, but they may not know how much of the chemical an individual was actually exposed to.

Biological modeling techniques can help scientists draw the connection between administered and internal doses and more closely compare results from animal and human studies.
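As a rough illustration of what such modeling does, here is a minimal one-compartment sketch that converts an administered dose into an internal (blood) concentration over time. The parameter values are invented and do not correspond to any particular chemical:

```python
import math

# Minimal one-compartment sketch: administered (external) dose -> internal dose.
# All parameter values are invented for illustration only.
dose_mg_per_kg = 10.0       # administered oral dose
absorbed_fraction = 0.6     # fraction of the dose actually absorbed
volume_l_per_kg = 0.8       # apparent volume of distribution
half_life_h = 4.0           # elimination half-life

k_elim = math.log(2) / half_life_h  # first-order elimination rate constant

def blood_concentration(t_hours: float) -> float:
    """Blood concentration (mg/L) at time t after a single absorbed bolus dose."""
    c0 = dose_mg_per_kg * absorbed_fraction / volume_l_per_kg
    return c0 * math.exp(-k_elim * t_hours)

for t in (0, 2, 8, 24):
    print(f"t = {t:2d} h: internal concentration ~ {blood_concentration(t):.2f} mg/L")
```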

Finally, many toxicology studies focus on only a single chemical. This is a valuable way to identify how one chemical affects the body. However, given that we are all exposed to chemical mixtures, these procedures may be of limited use in the real world.

The committee suggested that toxicologists incorporate real-world mixtures into their studies, to provide more relevant information about the risk to human health.

Leveraging toxicity testing for answers about low dose effects

This report demonstrates one of the challenges facing the field of toxicology and environmental health: How well can existing and emerging laboratory techniques predict adverse outcomes in humans? (If you’ve read some of my previous posts, you know that this issue is of particular interest to me.)

Traditional animal experiments usually use high doses, which don’t necessarily reflect the real world. These studies can be an important first step in identifying health hazards, but they cannot accurately predict how or at what levels the chemicals affect humans. The committee noted that more relevant doses and better modeling could help mitigate this problem.

Emerging high-throughput testing techniques use cell-based methods to detect how a chemical changes specific molecular or cellular activities. These newer methods are increasingly used in toxicology testing. They have the potential to quickly identify harmful chemicals, but have yet to be fully accepted or validated by the scientific community.

For these two case studies, the committee noted that high-throughput tests were not particularly helpful in drawing conclusions about health effects. Many of these studies are narrowly focused – looking at, for example, just a single signaling pathway, without indicating a chemical’s overall influence on an organism. Nevertheless, these methods could be used to prioritize chemicals for further in-depth testing, since activity in one pathway may predict a chemical’s capacity to cause harm.

Putting the report into action

Despite the imperfections of our testing methods, there’s already ample evidence about low-dose effects from many chemicals (including the two case studies from the committee). The EPA should implement this new strategy to efficiently identify and act on problematic endocrine-disrupting chemicals. Only through such strong, science-based efforts can we prevent adverse effects from chemical exposures – and allow everyone to live the healthy lives that they deserve.

Lessons from A Toxicology Detective Story: Use Caution with Controls

This article was originally posted on the Massive Science Consortium website, with the title “An Unexplained Result Shows Why Studying the Effects of Chemicals is so Tricky.”

Toxicologists are no strangers to mysteries. In fact, understanding unexpected results caused by unintentional chemical contamination in the laboratory has a storied history in the environmental health field.

In the late 1980s, researchers at Tufts University accidentally discovered that certain plastic components caused their estrogen-responsive cells to grow uncontrollably. Their findings spurred extensive work on endocrine disrupting chemicals (EDCs), which can interfere with normal hormonal activity in the body. Bisphenol-A (BPA), found in many plastic products, is a common example of an EDC.

Following in this tradition, researchers at the University of Massachusetts (UMass) Amherst recently conducted some important toxicology detective work after they noticed that mammary glands of adult male mice raised in a commercial laboratory (that supplies mice for scientists) were larger and more developed than the mammary glands of the same type of mice raised in their own laboratory.

Scientists use rodent mammary glands as models of human breasts, which allow them to better understand growth and development as well as risk factors and treatments for diseases like breast cancer. Toxicologists have paid particular attention to how mammary glands change after exposure to EDCs, since the chemicals can interfere with hormonal activity and mammary glands are especially responsive to hormonal signals.

So, when the UMass Amherst researchers noticed a difference in the mammary glands between the two groups of theoretically similar mice, it set off some alarm bells. Could an unidentified EDC exposure in the commercial lab be the culprit? And if so, could this potential EDC exposure impact the ability of scientists and policy-makers to draw conclusions from toxicological experiments?

Comparing Gland Growth

To find out, the researchers carefully compared the mammary glands and blood hormones between the two groups of mice. One group was ordered directly from a commercial supplier. The other group was ordered from the same commercial supplier and then bred for two generations to produce offspring that were raised in their own lab under controlled conditions to minimize exposure to EDCs.

The findings supported their preliminary observations: the mammary glands in the male mice raised in the commercial lab were larger and more developed than those of male mice raised in their own lab. The researchers noted that the commercially-raised mouse mammary glands actually mirrored those of mice from different experiments that had been intentionally exposed to BPA during early development.

By contrast, female mice raised in the commercial lab had smaller and less developed mammary glands at puberty than females raised in the researchers’ own lab. While this difference in response may at first seem counter-intuitive, endocrine disruptors are complicated. Because they interfere with hormones, which differ between males and females, EDCs have the potential to affect the sexes in different ways.

Contemplating the Culprit

Although the researchers found differences between the two sets of mice, they didn’t immediately know why. A simple explanation would have been that different amounts of circulating hormones at the time they examined the mice might have influenced their body composition. Yet, they detected no significant differences in estrogen, the primary hormone that drives mammary gland development, between the two groups.

It is also unlikely that genetic differences could have contributed to the distinct mammary gland growth, given that the non-commercially raised animals were actually only two generations removed from the original commercial strain. Such drastic genetic changes do not usually occur over such short cycles.

Therefore, the researchers hypothesized that the difference was more likely due to the effects of EDC exposures during their early life in the commercial laboratory. (However, they were not able to confirm this theory through specific tests.) Exposures to EDCs, among other chemicals, during sensitive windows of development in early life have the potential to cause long-lasting changes, including in the mammary gland.

Considering the Consequences

These findings, which suggest that animals raised in commercial labs may be exposed to EDCs, could impact how we interpret toxicological studies.

One main reason is what researchers call the “two-hit” model, which suggests that an exposure in early life makes an individual more sensitive to the effects of a second exposure later in life. In this context, early exposures to EDCs might prime laboratory animals for more pronounced responses when treated with a test chemical in an experiment later in life. In other words, some laboratory experiments may be erroneously linking the test chemical to an outcome that is actually due to a combination of the test chemical and being exposed to EDCs earlier in life.

This study only evaluated animals from one commercial lab, and conditions may differ in other labs. The possibility that animals could be subject to different unintentional exposures may affect our ability to compare studies and pool data from diverse labs to make science-based policies. This issue may also partially explain why research and policy on EDCs have been so highly controversial, with distinct labs generating very different results about the same chemicals.

These differences also have implications for a controversial practice in the field of toxicology: the use of “historical controls.” Sometimes, scientists compare the changes in treated animals from one study to a database of untreated animals from a collection of previous studies. This practice can provide researchers with a better sense of whether the treated animals they are studying are truly different from normal. Yet, this study suggests that control animals raised in some laboratories may be exposed to EDCs, and therefore it would not be appropriate to compare them to treated animals raised in different environments. The presence of too many differing variables would make it difficult to make an accurate comparison.

Taken together, these findings suggest that both scientists and the public should be cautious when interpreting certain studies. While the researchers did not conclusively confirm the linkage to early life EDC exposure, this highly likely explanation illustrates that, at least in some cases, there may be factors behind the scenes that could be influencing the results of toxicology experiments.

How should the scientific community address the implications of this study? While researchers could try to mitigate the impact of these exposures on their subsequent toxicological experiments by screening animals prior to beginning their work, a better approach would be to improve handling of animals in commercial facilities through strict standards. Exposures to EDCs, among other chemicals, should be minimized. If such exposures continue, it may be hard to trust the results of these toxicological studies, which could impede the development of appropriate, evidence-based environmental health policies and protections for the population.

A Decade into the “Vision,” Environmental Health gets a Progress Report

This year represents an important 10-year milestone for science and society.

No, I’m not referring to the 10th anniversary of the Apple iPhone, though that has undoubtedly changed all of our lives. Rather, 2017 marks ten years since the National Academy of Sciences (NAS) released its seminal report, Toxicity Testing in the 21st Century: A Vision and a Strategy.

In that report, the NAS laid out a vision for a new approach to toxicology that incorporates emerging cell-based testing techniques, rather than costly and time-intensive whole animal models, and utilizes early biological pathway perturbations as indications of adverse events, rather than relying on evaluations of end disease states. Tox21 and ToxCast, two federal programs focused on using alternative assays to predict adverse effects in humans, were initiated as first steps in this strategy. In the years since its release, the report has profoundly shaped the direction of environmental health sciences, particularly toxicology. (An analogous exposure sciences report, Exposure Science in the 21st Century: A Vision and a Strategy, was published in 2012.)

Now, one decade later, the NAS has reviewed progress on these efforts in its recently released report, Using 21st Century Science to Improve Risk-Related Evaluations.

How are we doing, and what are next steps?

Overall, the committee supports efforts to use data from new tools, such as biological pathway evaluations, in risk assessment and decision-making. (Of course, limitations should be clearly communicated, and tools should be validated for their specific purposes.) Several case studies are described as examples of situations where emerging tools can be useful, such as quickly prioritizing chemicals of concern or evaluating risks from chemical mixtures at a contaminated site.

This report also documents advancements and challenges for each of the three interconnected fields of environmental health sciences: toxicology, exposure science, and epidemiology. I’ve summarized some of these key points in the chart below, and additional (digestible) information is available in the NAS report summary.

  

 

[Chart: recent advancements and key challenges for toxicology, exposure science, and epidemiology. The key challenges are listed below.]

Toxicology

  • Incorporate metabolic capacity in in vitro assays
  • Understand applicability & limitations of in vitro assays
  • Improve biological coverage
  • Address human variability & diversity in response

Exposure Science

  • Coordination of exposure science data (ex: databases)
  • Integration of exposure data of multiple chemicals obtained through varied methods

Epidemiology

  • Improved data management & data sharing
  • Improved methods for estimation of exposures

I won’t go into detail on all of these points, but I do want to highlight some of the key challenges that the field of toxicology will need to continue to address in the coming years, such as:

  • Improving metabolic capacity of in vitro assays: Cell-based assays hold promise for predicting biological responses of whole animals, but it is critical to remember that these new tools rarely reflect human metabolic capacity. For example, if a chemical is activated or detoxified by an enzyme in our bodies, reductionist assays would not adequately reflect these changes – and thus their prediction would not be fully relevant to human health. We need continued work to incorporate metabolic capacity into such assays.
  • Improving biological coverage: An analogy that I’ve often heard in relation to the limitations of these new tools is that they are only “looking under the biological lamp post.” Essentially, we can only detect effects that the assays are designed to evaluate. So, we need further development of assays that capture the wide array of possible adverse outcomes. And we cannot assume that there is no hazard for endpoints that have not been evaluated.

New models of disease causation

Not only is the environmental health science ‘toolkit’ changing but also our understanding of disease causation. As discussed in the report, 21st century risk assessment must acknowledge that disease is “multifactorial” (multiple different exposures can contribute to a single disease) and “nonspecific” (a single exposure can lead to multiple different adverse outcomes). This advanced understanding of causality will pose challenges for interpreting data and making decisions about risk, and we will need to incorporate new practices and methods to address these complexities.

For example, we can no longer just investigate whether a certain exposure triggering a certain pathway causes disease in isolation, but also whether it may increase risk of disease when combined with other potential exposures. It gets even more complicated when we consider the fact that individuals may respond to the same exposures in different ways, based on their genetics or pre-existing medical conditions.

The Academy suggests borrowing a tool from epidemiology to aid in these efforts. The sufficient-component-cause model provides a framework for thinking about a collection of events or exposures that, together, could lead to an outcome.

[Figure: Sufficient-component-cause model. Three disease mechanisms (I, II, III), each with different component causes. Image from the NAS report, Using 21st Century Science to Improve Risk-Related Evaluations.]

 

Briefly, each disease has multiple component causes that fit together to complete the causal pie. These components may be necessary (present in every disease pie) or sufficient (able to cause disease alone), and different combinations of component causes can produce the same disease. Using this model may promote a transition away from a focus on finding a single pathway of disease to a broadened evaluation of causation that better incorporates the complexities of reality. (I’ve blogged previously about the pitfalls of a tunnel-vision, single pathway approach in relation to cancer causation.)
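To make the model a bit more concrete, here is a small sketch that represents each sufficient cause (“pie”) as a set of component causes and checks which mechanisms are completed by a given individual’s set of factors. The component names are hypothetical:

```python
# Sufficient-component-cause sketch: each "pie" is a set of component causes,
# and disease occurs when all of the components of at least one pie are present.
# Component names are hypothetical, for illustration only.
sufficient_causes = {
    "Mechanism I":   {"genetic susceptibility", "chemical A exposure"},
    "Mechanism II":  {"chemical A exposure", "chemical B exposure", "poor diet"},
    "Mechanism III": {"genetic susceptibility", "infection"},
}

def completed_pies(individual_factors: set) -> list:
    """Return the disease mechanisms whose component causes are all present."""
    return [name for name, components in sufficient_causes.items()
            if components <= individual_factors]

person = {"chemical A exposure", "chemical B exposure", "poor diet"}
print(completed_pies(person))  # -> ['Mechanism II']
```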

Integration of information, and the importance of interdisciplinary training

As the fields of toxicology, exposure science, and epidemiology continue to contribute data towards this updated causal framework, a related challenge will be the integration of these diverse data streams for risk assessment and decision-making. How should we weigh different types of data in drawing conclusions about causation and risk? For example, what if the in vitro toxicology studies provide results that are different than the epidemiology studies?

The committee notes that we will need to rely on “expert judgment” in this process, at least in the short term until standardized methods are developed. And they discuss the need for more interaction between individuals from different disciplines, so that knowledge can be shared and applied towards making these difficult decisions.

One issue that was not discussed, however, is the importance of training the next generation of scientists to address these complex challenges. Given the inevitable need to integrate multiple sources of data, I believe it is critical that the students in these fields (like me!) receive crosscutting training as well as early practice with examples of these multi-faceted assessments. Some programs offer more opportunities in this area than others, but this should be a priority for all departments in the coming years. Otherwise, how can we be prepared to step up to the challenges of 21st century environmental health sciences?

Looking forward

Speaking of challenges, we certainly have our next decade of work cut out for us. It is amazing to think about how much progress we have made over the last ten years to develop new technologies, particularly in toxicology and exposure sciences. Now we must: refine and enhance these methods so they provide more accurate information about hazard and exposure; address the complexities of multifactorial disease causation and inter-individual susceptibility; and work across disciplines to make decisions that are better protective of public health and the environment.

Ancient philosophy, modern toxicology

“The whole is greater than the sum of its parts.”
– Aristotle (384 BC-322 BC)

 

Aristotle was talking about metaphysics and the idea of emergence, but his insight corresponds to an important emerging concept in environmental health: combinations of different chemicals acting together in our bodies can produce larger (or different) effects than would be seen if each chemical were acting independently. In technical terms, this is called “synergism.”

Why does this matter? Through the course of our daily lives, we are all exposed to hundreds of different types of chemicals. Most laboratory toxicity studies, however, only assess the effects of a single compound in a carefully controlled environment. Consequently, the (very limited) data that we have on chemical hazard do not actually reflect real-world exposure situations (i.e., co-exposures to mixtures of chemicals). Researchers are beginning to address this deficiency, though, and initial results suggest that Aristotle’s ancient wisdom is eerily relevant to modern-day toxicology.

A recent study published in the journal Toxicological Sciences examined the interaction between polycyclic aromatic hydrocarbons (PAHs) and arsenic. PAHs are organic pollutants that are produced during combustion processes (including from tobacco). Many PAHs, such as benzo[a]pyrene, can cause DNA damage and are known or suspected to cause cancer. Arsenic is a naturally occurring element that can exist in different chemical forms. The inorganic form As+3 can interfere with DNA repair and is linked to skin diseases and cancer. Human exposure to As+3 often occurs through ingestion of contaminated drinking water or rice-based products. Many people around the world are exposed to both PAHs and inorganic arsenic simultaneously, but little is known about how these two chemicals — one that causes DNA damage, and one that interferes with DNA repair – act together in the body.

For this work, researchers examined the effects of As+3 and three specific PAHs (benzo[a]pyrene and two metabolites, BP-Diol and BPDE) separately and together in mouse thymus cells (precursors to T-cells). Because T-cells serve a critical function in the immune system, chemical damage could lead to immune dysfunction.

After chemical treatment, the researchers measured the amount of DNA damage and DNA repair inhibition. At specific combinations of doses (one with As+3 and BP-Diol, and one with As+3 and BPDE), they saw a larger effect from treatment with two chemicals simultaneously than what would have been predicted from treatment with the same chemicals individually.
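The underlying logic of that comparison can be sketched very simply – compare the observed combined response to what additivity of the individual responses would predict. The numbers below are invented, not the study’s actual measurements:

```python
# Toy additivity check -- invented response values, not the study's measurements.
# Responses are the increase in DNA damage over untreated control cells.
effect_as3 = 5.0           # effect of As+3 alone
effect_bp_diol = 8.0       # effect of BP-Diol alone
observed_combined = 25.0   # effect when both chemicals are given together

predicted_additive = effect_as3 + effect_bp_diol

if observed_combined > predicted_additive:
    print(f"Observed combined effect ({observed_combined}) exceeds the additive "
          f"prediction ({predicted_additive}): consistent with synergism.")
else:
    print("Combined effect does not exceed additivity in this toy example.")
```

Formal mixture analyses use more rigorous definitions of additivity (such as dose addition or independent action), but the core comparison is the same.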

Next, they measured cell death (specifically, apoptosis) and found that while individual exposures to As+3 and BP-Diol did not increase death, exposure to the compounds together caused a synergistic increase in the percentage of dead cells. One possible explanation for this result is that at low levels of separate exposure, the body can adapt to prevent damage. But perhaps with the two chemicals together, the system is overwhelmed and cannot compensate.

Overall, based on these and other related results in this study, the researchers hypothesized that the As+3 increases the toxicity of certain PAHs through its ability to inhibit DNA repair pathways. As I noted above, PAHs alone can cause DNA damage. With the addition of As+3, which interferes with DNA repair during normal cell cycle replication, cell damage is even greater.

Previous work had documented the existence of similar interactions between PAHs and arsenic, but those studies had used high doses that were not representative of potential human exposures. This study, by contrast, investigated the effects of low-level exposures that are more similar to what we might encounter in the environment.

One important caveat of this work is that the researchers conducted the experiment in isolated mouse thymus cells. In vitro systems (or “test tube experiments”) are increasingly common in toxicology, as the field aims to find alternatives to whole animal testing. However, there are obvious limitations to these models. Not only are mouse cells different from human cells, but these mouse thymus cells are separated from the rest of their system and may not represent how a fully functional organism responds and/or adapts to a toxicant exposure. As follow-up, researchers should test this chemical combination in a relevant animal model to see whether similar results are obtained.

Nevertheless, this study provides important evidence of synergistic effects from low-level exposures to two common environmental contaminants. And these data may be just the tip of the iceberg. What other potential interactions exist among the thousands of other chemicals that we are exposed to over the course of our lives? The challenge with synergistic interactions is that they cannot always be predicted from testing individual chemicals. (I’ve written about this previously, specifically with regard to cancer processes.) It is daunting to think about testing all of the potential combinations that may exist, since our public health agencies are struggling to generate even basic toxicity data on all of these chemicals individually.
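To get a feel for the scale of the problem, some back-of-the-envelope arithmetic helps. The chemical count below is a round, illustrative figure, not an official inventory number.

```python
# Back-of-the-envelope arithmetic on exhaustive mixture testing.
# The chemical count is a round, illustrative figure.
from math import comb

n_chemicals = 8_000           # illustrative count of widely used chemicals
pairs = comb(n_chemicals, 2)  # unique two-chemical combinations
triples = comb(n_chemicals, 3)

print(f"{pairs:,} pairwise combinations")     # ~32 million
print(f"{triples:,} three-way combinations")  # ~85 billion
```

Even before considering doses or timing, the number of pairs alone dwarfs the testing capacity of any agency.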

I wish we could consult Aristotle on this problem.

One strategy to start to address this challenge could be to prioritize testing combinations of chemicals that – like the pair chosen in the study described here – are most common across the population. Existing biomonitoring efforts, such as the U.S. National Health and Nutrition Examination Survey (NHANES), could guide the selection of appropriate mixtures. Testing these highly relevant chemical combinations would provide valuable information that could be immediately translated into risk calculations or regulatory standards.
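For illustration only, here is one very simplified way such a prioritization might work: rank chemical pairs by how often both are detected in the same person. The detection records below are invented; a real analysis of NHANES data would need proper survey weighting and careful handling of detection limits.

```python
# Sketch: rank chemical pairs by how often they are co-detected in the same person.
# The detection records are invented, purely for illustration.
from collections import Counter
from itertools import combinations

# Each set lists chemicals detected in one (hypothetical) person's samples
detections = [
    {"arsenic", "benzo[a]pyrene", "glyphosate"},
    {"arsenic", "benzo[a]pyrene"},
    {"glyphosate", "arsenic"},
]

pair_counts = Counter()
for person in detections:
    for pair in combinations(sorted(person), 2):
        pair_counts[pair] += 1

# The most frequently co-detected pairs would be candidates for mixture testing
for pair, count in pair_counts.most_common(3):
    print(pair, count)
```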

As Plato, another great ancient thinker, said, “the beginning is the most important part of the work.” So, while it is definitely overwhelming to think about tackling the question of chemical combinations, it is crucial that we take those first steps and make a start.

Environmental exposures and autism: decoding brain transcriptional patterns

Recent evidence indicates that individuals with autism exhibit characteristic gene expression changes in the brain. Specifically, they often have increased expression of genes related to immune and microglial function and decreased expression of genes related to synaptic transmission. Recognizing such patterns, researchers at the University of North Carolina-Chapel Hill asked an intriguing question: can these distinctive changes be used to identify chemicals that might contribute to risk of Autism Spectrum Disorder (ASD)?

For this work (recently published in Nature Communications), the researchers screened 294 chemicals from the US EPA ToxCast Phase I library in mouse cortical neuron-enriched cultures. While such 2D cultures do not capture the complexity of a fully functioning brain, gene expression profiling showed that the system closely resembled the whole embryonic brain during mid-to-late gestation, a critical period of development whose disruption has been linked to ASD.

After treatment, they monitored gene expression and created six clusters of chemicals based on the patterns of changes observed. While all of the chemical clusters represent potentially consequential biological changes, the researchers focused in particular on cluster 2, which up-regulated expression of immune and cytoskeletal-related genes and down-regulated expression of ion channel and synaptic genes. These patterns mirror changes observed in post-mortem ASD brains. (Interestingly, the gene expression changes also correlated with patterns observed in Alzheimer’s disease and Huntington’s disease, which suggests that neurodevelopmental and neurodegenerative diseases may share common pathways and pathology.) In addition to the observed gene expression changes, the cluster 2 chemicals also led to oxidative stress and microtubule disruption – effects that are implicated in neurodevelopmental and neurodegenerative disorders.
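For readers curious about the mechanics, here is a highly simplified sketch of how chemicals can be grouped by their gene expression “signatures.” This is not the authors’ actual analysis pipeline: the matrix below is random stand-in data, and a real analysis would involve normalization, replicates, and careful gene selection.

```python
# Simplified sketch: cluster chemicals by their gene expression profiles.
# Random stand-in data; not the authors' pipeline.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_chemicals, n_genes = 294, 500
log2_fold_changes = rng.normal(size=(n_chemicals, n_genes))  # chemicals x genes

# Hierarchical clustering on correlation distance between expression profiles
tree = linkage(log2_fold_changes, method="average", metric="correlation")
clusters = fcluster(tree, t=6, criterion="maxclust")  # cut the tree into 6 clusters

print(np.bincount(clusters)[1:])  # number of chemicals in each cluster
```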

The results of this work are as unsettling as they are groundbreaking. The chemicals in cluster 2 are EPA-approved pesticides and fungicides: famoxadone, fenamidone, fenpyroximate, fluoxastrobin, pyraclostrobin, pyridaben, rotenone, and trifloxystrobin. (Rotenone might sound familiar to you; it has already been linked to Parkinson’s disease.) Several of these chemicals can be found at high concentrations on conventionally grown food crops, such as leafy green vegetables – yet another reason to buy organic. And, given that usage of some of these chemicals seems to be increasing (see Figure 7 of their paper), exposure across the population is a concern.

Correlation does not mean causation, however, and therefore this study does not prove that these chemicals trigger autism. Furthermore, as noted above, this study was conducted in cell culture using mouse neurons and therefore is not fully representative of what would happen in an actual human brain. But, we can use these findings as an important warning and should now prioritize these chemicals for further evaluation in animal studies (and also, perhaps, develop relevant epidemiological studies to monitor population-level effects given existing exposures).

While we lack general hazard information on most of the thousands of chemicals in commerce, the absence of information about potential developmental neurotoxicity is a particular problem. This study demonstrates that evaluation of gene expression changes could provide a screening-level assessment that might help to fill some of these concerning gaps. In addition, the authors suggest that their approach could be used to help identify therapeutics that could counter these disease-related gene expression changes – perhaps the first step towards treatment for this increasingly common condition.

Assessing new methods for detecting obesogens

The Environmental Protection Agency (EPA) has invested tremendously in its new toxicity-testing program, ToxCast, which aims to use in vitro high-throughput screening (HTS) assays to evaluate the effects of thousands of chemicals and prioritize them for further in vivo testing. Yet, many questions remain regarding the reliability and relevance of these assays. For example, are they providing accurate predictions about the effects of interest? Are the assays consistent over time and between laboratories? And, ultimately, do we have enough confidence in the results to use them as the basis for decision-making?

While EPA has begun to evaluate some of their assays, a recently published article in Environmental Health Perspectives reports specifically on the performance of ToxCast assays and related tools in detecting chemicals that promote adipogenesis. Such “obesogenic” chemicals interact with pathways involving the peroxisome proliferator-activated receptor gamma (PPARγ), among others, to alter normal lipid metabolism and contribute to abnormal weight gain. (Note: the term “obesogen” was coined by Bruce Blumberg, the senior author of this paper.)

For the first part of this work, the researchers evaluated ToxCast results for one specific pathway in adipogenesis. Of the top 21 chemicals that were reported to bind to PPARγ in ToxCast Phase I, only 5 were actually found to activate PPARγ in their own laboratory.

Next, they examined the predictive power of multiple ToxCast assays representing various pathways related to adipogenesis. The researchers chose eight biologically relevant targets (including PPARγ) and generated a ToxPi (Toxicological Priority Index) graphic based on assay results for the chemicals (see figure from the paper, below). Each color represents a specific target evaluated by one or more assays, and larger slices correspond to higher relative activity in those assays. In this way, they could combine the results of multiple assays for each chemical and easily compare adipogenic potential.
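To illustrate the basic idea (this is not the actual ToxPi software), here is a simplified sketch of how several assay readouts can be combined into a single prioritization score per chemical. The assay targets, weights, and activity values are invented.

```python
# Simplified ToxPi-style scoring: scale each assay to [0, 1] across chemicals,
# then take a weighted average. Assay targets, weights, and values are invented.
import numpy as np

# Columns: hypothetical "PPARg binding", "PPARg activation", "lipid accumulation"
weights = np.array([1.0, 1.0, 2.0])  # relative importance of each slice

# Rows = chemicals, columns = raw activity in each assay (hypothetical values)
raw = np.array([
    [0.2, 0.8, 0.9],   # chemical A
    [0.9, 0.1, 0.05],  # chemical B
])

col_min, col_max = raw.min(axis=0), raw.max(axis=0)
scaled = (raw - col_min) / (col_max - col_min + 1e-12)  # scale each slice to [0, 1]
toxpi_scores = scaled @ weights / weights.sum()         # weighted average per chemical
print(dict(zip(["chemical A", "chemical B"], toxpi_scores.round(2))))
```

Larger slices (higher relative activity, or a higher weight) push a chemical’s overall score up, which is what makes the pie-style graphic a convenient way to compare chemicals at a glance.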

Adipogenesis ToxPis. Supplemental figure from Janesick et al. (2016), “On the Utility of ToxCast and ToxPi as Methods for Identifying New Obesogens,” Environmental Health Perspectives.

How well do these ToxPis – created based on weighted results from ToxCast assays – predict actual PPARγ activation and overall adipogenic activity? The researchers found that only 2 of the 11 highest-scoring ToxPi chemicals activated PPARγ in their laboratory assays, and only 7 of the 17 top- and medium-scoring ToxPi chemicals were active in cell culture adipogenesis assays. In addition, 2 of the 7 chemicals that appeared negative in the ToxPi analysis actually promoted adipogenesis in culture.
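In simple arithmetic terms, those concordance figures amount to rough “positive predictive values” of well under half:

```python
# Rough positive predictive value (confirmed actives / chemicals predicted active),
# using the counts reported in the paper and quoted above.
confirmed_pparg, predicted_pparg = 2, 11
confirmed_adipo, predicted_adipo = 7, 17

print(f"PPARγ activation: {confirmed_pparg / predicted_pparg:.0%}")  # ~18%
print(f"Adipogenesis:     {confirmed_adipo / predicted_adipo:.0%}")  # ~41%
```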

EPA had previously recognized the potential for false positives and false negatives in the testing program and had begun to implement correction methods, such as z-score adjustments, in more recent ToxCast phases. Unfortunately, problems remained even after the researchers took these supposed improvements into account: while the corrections removed many false positives and false negatives, they also eliminated the true positives.

These results are concerning, to say the least. Why are the ToxCast assays performing so poorly in predicting PPARγ activity and overall obesogenic potential? The researchers suggest several possible reasons, including 1) the relatively small number of assays developed specifically for obesogenic pathways (especially compared to estrogen and androgen receptor assays), and 2) the inherent difficulty of using simple receptor-binding tests to reflect the complexities of the endocrine system.

These issues must be resolved if we are to move forward with the goal of using these assays for prioritization and risk assessment. Last year, EPA announced (see here and here) that they would allow the use of a combination of ToxCast estrogen receptor assays to replace several existing tests in the Endocrine Disruptor Screening Program (EDSP). Clearly, however, other areas of the ToxCast program need additional refinement and validation before they can be used confidently for regulatory purposes.

While it is discouraging to read about these weaknesses in ToxCast, such external assessments are essential and will motivate important improvements. With more input from and collaboration with the scientific community, we can be hopeful that EPA’s ToxCast program will be able to fulfill its goal of efficiently evaluating thousands of chemicals and serving as the basis for decision making to protect public health.