Over the last few days I've been trying to figure out that recent paper in @ScienceMagazine about pesticide toxicity. There are a few things that don't make sense to me, so I'm going to make a thread here as kind of my scratch pad. Maybe I'll write a blog post about it later.
This is the paper: https://science.sciencemag.org/content/372/6537/81
In general, the approach they took is ok. They basically used the same quotient approach that I used in my recent herbicide toxicity paper. https://www.nature.com/articles/ncomms14865
The risk quotient (or hazard quotient) method divides the amount of a substance applied by a toxicity value for that substance (e.g. an LD50). This gives an intuitive metric for comparison: if the quotient goes up, applied toxicity increased; if the quotient goes down, applied toxicity decreased.
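For anyone unfamiliar, here's roughly what that calculation looks like. The numbers below are made up for illustration; they're not from their paper or mine.

def hazard_quotient(amount_applied, toxicity_value):
    # amount of substance applied divided by a toxicity value (e.g. an LD50-based
    # threshold), both in the same units (say, kg)
    return amount_applied / toxicity_value

# made-up numbers: the same amount applied against two different toxicity values
print(hazard_quotient(1000, 10))   # 100.0 -> higher quotient = more applied toxicity
print(hazard_quotient(1000, 100))  # 10.0  -> lower quotient = less applied toxicity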
I have (I think?) three points of concern about the paper:
1) lots of missing values in their tox data; not sure if that influences their results.
2) broad trends aren't necessarily wrong, but there's important context missing.
3) The tox data they chose is weird.
First, the missing data. I'm focusing here mostly on the herbicides, because that's the pesticide type I know best. Missing tox values are potentially (but not necessarily) problematic because they can bias the risk quotient up or down depending on which pesticides are missing.
If the tox values are missing mostly at random, and equally spaced across the time period analyzed, then it probably wouldn't impact the overall trend much. But if, for example, a pesticide with missing tox data was used a LOT 20 years ago but not now, it could impact the trend.
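A toy example of how that could play out, with entirely made-up numbers:

# Toy example, made-up numbers. Quotient = kg applied / toxicity value.
# Herbicides with no toxicity value get dropped, which is (as I read it) what the paper does.
year_1995 = {"old_herbicide": (5000, None),  # heavily used back then, no tox value
             "herbicide_b":   (1000, 0.5)}
year_2015 = {"old_herbicide": (100,  None),  # barely used any more
             "herbicide_b":   (1000, 0.5)}

def total_quotient(uses):
    return sum(kg / tox for kg, tox in uses.values() if tox is not None)

print(total_quotient(year_1995), total_quotient(year_2015))
# both 2000.0 -> the trend looks flat, even though thousands of kg of the
# un-scored herbicide dropped out of use between the two years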
So I was mainly interested to see if there were patterns or trends in *when* the pesticides with missing tox values were used. And here's the proportion of total herbicide kilograms applied that were missing tox values (RTL, or regulatory threshold levels).
At the beginning of the period, around 10% of the applied herbicide weight did not have an RTL, and thus would be excluded from the total applied toxicity trends they reported for terrestrial plants.
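In case it's useful, this is roughly how I computed that proportion. The column names ('year', 'kg_applied', 'rtl') are just placeholders for however the use data and RTL table get joined, and the rows here are made up.

import pandas as pd

# made-up rows; in reality this is the joined use + RTL data
use = pd.DataFrame({
    "year":       [1992, 1992, 2015, 2015],
    "kg_applied": [900, 100, 950, 50],
    "rtl":        [0.14, None, 0.14, None],  # None = no RTL available
})

missing_share = (
    use.assign(missing_kg=use["kg_applied"].where(use["rtl"].isna(), 0))
       .groupby("year")[["missing_kg", "kg_applied"]].sum()
       .pipe(lambda d: d["missing_kg"] / d["kg_applied"])
)
print(missing_share)  # share of applied weight with no RTL, by year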
That trend differed depending on crop, though. I didn't look through all the crops, but for soybean and cotton, the trend wasn't monotonic; the herbicides without tox values made up about the same proportion of applied weight at the beginning and end of the period, but it bounced around in between.
But for corn, the trend was really evident. Almost 15% of the herbicide weight applied early in the period had no tox values, compared to less than 5% after ~2000.
Does any of that missing data matter? I really don't know. It depends on what RTL values those herbicides would have had if values existed, so it's really hard to say.
I was criticized a little bit in my toxicity paper for my choice of toxicity measures (Rat LD50 and Rat Oral NOEL) because they weren't necessarily the most indicative of human toxicity. Which is a fair criticism; it would have been better to use reference doses (RfDs).
The reason I chose rat values instead of RfDs is that RfDs were not available for some of the old herbicides, and I didn't want to exclude those herbicides because it would necessarily bias the data toward an upward toxicity trend. So I used the most complete tox data I could.
I'm going to save #2 for another day after I've had even more time to look through the data. But I do want to note my concern about #3, the choice of toxicity endpoint. https://twitter.com/WyoWeeds/status/1380369130390360065
As I mentioned upthread, the tox values are what they call RTLs, or regulatory threshold levels. At first glance, I assumed this was something like the reference dose (RfD) I mentioned above, and so I thought it was a good, reasonable choice. But...
for at least some of the organism groupings, I really don't understand how these RTL values relate to actual toxicity. I'll give an example with herbicides.
This table shows a selection of herbicides, the RTL values for terrestrial plants from their supplementary information (mg/ha), the RTL converted to more familiar units (kg/ha), and a typical use rate for each herbicide that I looked up.
The last column in the table is just the RTL divided by the use rate. For herbicide impacts on terrestrial plants, I would have assumed that this ratio would be really similar between herbicides. But it is not - it differs by several orders of magnitude (0.0005 to 0.5).
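That last column is a trivial calculation, but just to be explicit about it (using the acetochlor and glyphosate numbers I get to below, with rough mid-range use rates):

# RTL divided by typical use rate, both in kg/ha
# (RTLs from their supplementary info after the mg/ha -> kg/ha conversion;
#  use rates are rough, typical field rates)
herbicides = {
    # name: (RTL kg/ha, typical use rate kg/ha)
    "acetochlor": (0.0014, 2.0),  # applied at roughly 1-3 kg/ha
    "glyphosate": (0.14,   1.0),  # applied at roughly 1 kg/ha
}

for name, (rtl, use_rate) in herbicides.items():
    print(name, rtl / use_rate)
# acetochlor 0.0007, glyphosate 0.14 -> orders of magnitude apart, even though
# both use rates are set to control weeds effectively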
And this, I don't understand. Herbicide use rates are typically set to provide extremely high efficacy (well over 99% control of target weeds), but not so high that the product costs the manufacturer more to produce and sell than necessary.
It doesn't make any sense that acetochlor (applied at 1 to 3 kg/ha) would have a toxicity (RTL) value of 0.0014, but glyphosate (applied at ~1 kg/ha) would have a toxicity value of 0.14. This suggests that acetochlor is 100 times more toxic to terrestrial plants than glyphosate.
Seriously - acetochlor is NOT 100 times more toxic to terrestrial plants than glyphosate. This is not debatable. Glyphosate is FAR more toxic to nearly any terrestrial plant than acetochlor.
Other comparisons of use rates to RTL values show similar confusing disconnects.
Which means: the RTL values are NOT a very good indicator of toxicity, at least not for terrestrial plants. I don't know if the same issue exists in their other organism groups (pollinators, mammals, etc.). Will try to look at that later.
I'm going to bed now.