A recent study in @nature couldn't replicate drastic CO2 effects on coral reef #fish behaviour & empirically found no effect of #oceanacidification https://go.nature.com/3hK49UR
Our #metaanalysis of the past decade on this topic concurs https://ecoevorxiv.org/k9dby/
Breakdown thread

We demonstrate one of the most striking examples of the #DeclineEffect in #ecology to date, w/ reported effects of OA on fish behaviour all but disappearing over past decade
If you've never heard of the #DeclineEffect see: https://bit.ly/2EbZX2o
Qualitatively, the number of studies reporting 'strong' effects of #oceanacidification on fish behaviour has plummeted over time
Quantitatively, effect size magnitudes (log response ratio) have declined from averages of 3–4 in early pioneering studies to 0.2–0.4 over the past 5 years
While highly significant in early years, mean effect size magnitudes have not differed significantly from zero in 4 of the past 5 years
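For anyone unfamiliar with the metric: the log response ratio (lnRR) is just the natural log of the treatment mean over the control mean. A minimal sketch (the example means are hypothetical, chosen only to show the scale of the numbers above):

```python
import math

def log_response_ratio(mean_treatment, mean_control):
    """lnRR: natural log of the treatment mean divided by the control mean."""
    return math.log(mean_treatment / mean_control)

# An lnRR of ~3 implies a ~20-fold difference between treatment & control...
print(log_response_ratio(20.0, 1.0))   # ~3.0
# ...while an lnRR of ~0.26 implies only a ~30% difference
print(log_response_ratio(1.3, 1.0))    # ~0.26
```

So the decline from 3–4 down to 0.2–0.4 is a shift from order-of-magnitude effects to modest percentage-level ones.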
To check if this #DeclineEffect was due to increasing studies on cold-water species, we removed cold-water studies (b/c these species may be more tolerant to OA than tropical coral reef species)
Nope…
But maybe OA only has an effect when some type of cue or stimulus is involved – after all, the biggest effects are with predator cues!
Again, nope…
OK, so if it's not biological, then what could it be? Could it be… #BIAS!?

We first checked for methodological bias
Underpowered studies are prone to false positives & exaggerated estimates – with small samples, noise alone can produce apparently strong effects that don't actually exist
Do studies showing super-strong effects have low sample sizes?
Yep...
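A minimal simulation sketch (not from our paper – the sampling parameters are made up purely for illustration) of why small n inflates apparent effects:

```python
import random
import statistics

random.seed(1)

def simulated_effect(n, true_diff=0.0, sd=1.0):
    """Estimated treatment-minus-control mean difference from one simulated experiment."""
    control = [random.gauss(0.0, sd) for _ in range(n)]
    treatment = [random.gauss(true_diff, sd) for _ in range(n)]
    return statistics.mean(treatment) - statistics.mean(control)

# Even with NO true effect, small-n experiments scatter widely around zero,
# so the ones that get noticed & published can look like "strong" effects
small_n = [abs(simulated_effect(5)) for _ in range(2000)]
large_n = [abs(simulated_effect(50)) for _ in range(2000)]
print(f"mean |effect|, n=5:  {statistics.mean(small_n):.2f}")
print(f"mean |effect|, n=50: {statistics.mean(large_n):.2f}")
```

The small-n runs produce far larger spurious effects on average, even though the true effect is identical (zero) in both cases.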
We also found that over time, avg n & proportion of studies w/ n>30 have increased
This suggests that the number of fish used in experiments partly explains the #DeclineEffect, but some high-n studies show strong effects, so n is not everything…
OK, but lots of fields have underpowered studies – what's the harm?
Are studies w/ super-strong effects more likely to be published in influential high IF journals & thus get more attention?
…


As with sample size, we saw that the average IF of the journals publishing these papers decreased over time
Note the strong blip in IF for 2014 which was accompanied by a similar blip in mean effect size for that year!
Strong evidence for selective publishing
This study provides strong evidence that dramatic reports of OA affecting fish behaviour are probably exaggerated &, frankly, false
The strong effects appear linked w/ methodological bias, selective publication of outstanding effects, and other unexplained phenomena
We suggest that OA-fish behaviour studies be given more weight when n>30 fish per treatment
It is imperative that authors REPORT SAMPLE SIZE PRECISELY!!!!
A massively frustrating part of this study was trying to decipher n – 34% of studies didn't report it adequately!!
Reviewers & editors can also help here by being skeptical & critical of manuscripts reporting outstanding effects, especially those w/ n<30
We also strongly suggest that unbiased results be published early and alongside studies showing strong effects
How can we do this?
PRE-REGISTRATION! https://go.nature.com/3c2Q43v
It's also important for null results to be published in high IF journals so they are given equal public weighting
A scary anecdote w/ this paper: it's been desk rejected by 5 high IF journals that previously published those extreme OA effects, each taking >1 month to decide…
Researchers should also incorporate best practices for behavioural studies whenever possible
For example, use published methodological guidelines such as https://bit.ly/2RAUghe & use automated technologies for recording behaviour if possible
Finally, be critical! Especially of early findings w/ large effects
Scientists are good at predicting which studies are likely to replicate & which ones won't (see https://go.nature.com/3mtsCRK ) – apply that skepticism early!!
Does #oceanacidification affect marine animals?
In many instances, yeah...
But in light of our results & those of the non-replication @nature study ( https://go.nature.com/3hK49UR ), direct effects of OA on fish behaviour are likely small