Yes, peer review has its problems. However, without peers, there's no peer review at all. Therefore, publishing original #statistics research in non-#stats journals is - well, not the best idea.

Yet, #sportsscience did it (again).

A thread ...
1/n
Unfortunately, the method of Niiler (2020) tends to reject a true null hypothesis far more often than the chosen significance level alpha (➡️ invalid inference ‼️).

Here's a simulation study to demonstrate that the size of the test is way larger than alpha=0.05 ⤵️
3/n
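
A minimal sketch of how such a size simulation works (a toy setup of my own, not the exact one from the letter): two groups of curves are simulated under a true null of no group difference, a pointwise two-sample t-test is run at each of the M time points, and the whole-curve null is rejected as soon as any pointwise test rejects. The uncorrected version below already shows how the family-wise error rate blows past alpha = 0.05; plugging in any candidate alpha correction just means replacing alpha in the rejection rule.

import numpy as np
from scipy import stats

# Toy size simulation (assumptions: i.i.d. standard normal data, no group
# difference, n curves per group observed at M time points).
rng = np.random.default_rng(1)
n, M, alpha, n_sim = 20, 100, 0.05, 2000

false_rejections = 0
for _ in range(n_sim):
    g1 = rng.standard_normal((n, M))  # group 1, null is true
    g2 = rng.standard_normal((n, M))  # group 2, null is true
    _, pvals = stats.ttest_ind(g1, g2, axis=0)  # M pointwise t-tests
    # Whole-curve null is rejected if ANY pointwise test rejects.
    false_rejections += np.any(pvals < alpha)

print(f"Empirical size: {false_rejections / n_sim:.3f} (nominal alpha = {alpha})")

With M = 100 time points, the empirical size ends up close to 1 instead of 0.05, which is exactly the kind of inflation a valid correction has to prevent.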
This thread is to make the #AcademicTwitter community aware of the flawed method in Niiler (2020).

In case you're wondering:
Yes, I contacted the author and tried to start a dialog, but got no response.

So, I submitted a "letter to the editor" of Gait & Posture.
5/n
It felt really strange to do this, but doing nothing wasn't an option either. My letter can be found here:

http://www.dliebl.com/files/Liebl_2020_LttE_G&P.pdf

I'll keep you posted about the process.

6/end

PS: Yes, there's still research besides COVID research 😉.
Btw: In the alpha level correction of Niiler (2020), M is the number of tests, alpha is the significance level (e.g., alpha = 0.05), and rho is the first-order autocorrelation of the time series of test statistics. All of this is completely ad hoc, and there's no theoretical justification.
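
For illustration, here's a hedged sketch of how one could estimate the lag-1 autocorrelation rho from a series of pointwise test statistics; this is just the standard sample estimator, and Niiler (2020) may compute rho differently (his formula is not reproduced here):

import numpy as np

def lag1_autocorrelation(t_stats):
    # Standard sample lag-1 autocorrelation of a series of test statistics.
    # Hypothetical helper, not taken from Niiler (2020).
    t = np.asarray(t_stats, dtype=float) - np.mean(t_stats)
    return np.dot(t[:-1], t[1:]) / np.dot(t, t)

# Toy example: M = 100 smooth-ish test statistics along a gait cycle.
rng = np.random.default_rng(2)
t_stats = rng.standard_normal(100).cumsum() / 10
M, alpha, rho = len(t_stats), 0.05, lag1_autocorrelation(t_stats)
print(M, alpha, round(rho, 3))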
Niiler (2020) did a Monte Carlo simulation to "demonstrate" the correct size. However, he chose a data-generating process for which the series of test statistics is constant, so for this process there was no multiple testing problem in the first place. He didn't notice this, since his method is also biased.
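
To see why a data-generating process with constant test statistics can't reveal a multiple testing problem, here's a toy sketch (my own construction, not Niiler's simulation design): if all M statistics are identical, "some test rejects" is the same event as "one test rejects", so the family-wise error rate stays at alpha even without any correction. With independent statistics it explodes.

import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
alpha, M, n_sim = 0.05, 100, 5000
crit = stats.norm.ppf(1 - alpha / 2)  # two-sided critical value

fwer_const, fwer_indep = 0, 0
for _ in range(n_sim):
    z_indep = rng.standard_normal(M)             # M independent statistics
    z_const = np.full(M, rng.standard_normal())  # one statistic repeated M times
    fwer_indep += np.any(np.abs(z_indep) > crit)
    fwer_const += np.any(np.abs(z_const) > crit)

print(f"FWER with constant statistics:    {fwer_const / n_sim:.3f}")  # ~ 0.05
print(f"FWER with independent statistics: {fwer_indep / n_sim:.3f}")  # ~ 0.99

So a simulation built on constant test statistics will always report the nominal level, no matter how bad the correction is.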