First of all, I should state that I do have some sympathy for the problem highlighted by the authors. Yes, it is worrying when there are mistakes in the published literature. However, the cure implicit in this article is worse than the problem.
The fundamental assumption underlying this article is that only statisticians are qualified to introduce or critique statistical methods. Sports scientists should only be allowed to publish work if they have been chaperoned by a statistician. This has some profound implications.
Essentially, the authors want to institute a new layer of quality control into the research process, one based on the authority of expert opinion. That seems reasonable, right?
Hell no!

Such a situation is contrary to the basic precepts of scientific society. These were probably most famously described by Robert Merton:
A key part of Merton's description of science is that it is an open society. No one has precedence over another because of their education, status or role. Rather, all arguments should be treated equally, independent of source, and judged on their merits.
So, the argument implicit in their paper is to introduce a new caste of scientists who have special authority. Mikhail Bakunin warned against this very situation in "God and the State".
Who defines those who are qualified to comment on statistical matters? This is vague in the article, but seems to be either:

1. They are employed by a statistics or methods department;
2. They have an MSc in statistics.

More on this later.
But is statistics special? Does it demand special rules?

Of course not.

Mistakes are made in all branches of science. People with poor credentials do crappy work in all fields. More importantly, people with good credentials also do crappy work in all fields.
For instance, in biomechanics I have been very critical of some newish ideas (force-velocity theory and force-velocity profiling). I am of the opinion that the mistakes I perceive are in large part due to the authors not having sufficient mathematical knowledge.
Does this mean that I think you should only be able to publish biomechanics papers if you have a maths degree? Of course not.
The philosophical reasons why science should be an open society without authority are many. If you are interested, this is one of the key messages in my book Subvert (sorry for the plug). http://geni.us/Subvert 
The other main suggestion in the paper is that sports scientists should learn more statistics. This is all very well, but it does mean that they would then learn less sports science. Again, we see that the authors are biased towards the primacy of statistics as a discipline.
So my main concern here is that the authors advocate for the statistics police, and that this would be a terrible thing for science. If you don't think this is what they are advocating for, I draw your attention to the places where they argue that statisticians should vet methods before sports scientists are allowed to use them. As an aside, I don't understand how this is practically workable: who gives the stamp of approval to say that a technique is now "vetted", and what happens if there is dissent?
Firstly, it is worth talking about preprints. These are normally published in order for an author to get some feedback prior to submitting to a journal. I have never submitted this work to a journal and have never intended to. Instead, it is a discussion piece.
Essentially, publishing a preprint invites comment, and personal communication with the authors. Ironically, despite the fact that the authors argue for greater collaboration, none of them have ever taken the opportunity to engage with me on their concerns.
Secondly, their critique of my paper gives away their bias. Essentially, they reveal that they expect deference to authority rather than mature debate on specific issues.
I appreciate that the purpose of their paper is not to discuss PCA. However, they make five specific critiques here, of which four are simply appeals to authority:
1. Does not centre the data conventionally (i.e. previous authors haven't done it like this).
2. Interprets the resulting scores as loadings (i.e. previous authors haven't done it like this).
3. An expert on Twitter says it is bad (they didn't, but they did say that the paper wasn't helpful).
4. It's new (it's not: people have been doing it this way in motor control research for at least 25 years, and many authors do it this way in a wide variety of fields).
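To make the centring point concrete, here is a minimal NumPy sketch (my own illustration, not code from either paper) contrasting conventional column-centred PCA with the row-centring sometimes used on movement waveforms. The variable names and the toy data are assumptions for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4))  # 50 observations (e.g. trials) x 4 variables

# Conventional PCA: centre each COLUMN (variable) to zero mean, then
# take the right singular vectors of the centred data matrix.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = Vt.T              # variable weights per component
scores = Xc @ Vt.T           # projections of each observation

# Alternative sometimes seen in motor control work: centre each ROW
# (observation/waveform) instead, so components describe variation
# around each trial's own mean rather than the group mean.
Xr = X - X.mean(axis=1, keepdims=True)
_, _, Vt_row = np.linalg.svd(Xr, full_matrices=False)
scores_row = Xr @ Vt_row.T

# Column-centring zeroes the column means; row-centring the row means.
assert np.allclose(Xc.mean(axis=0), 0)
assert np.allclose(Xr.mean(axis=1), 0)
```

Neither choice is mathematically invalid; they simply decompose different sources of variance, which is why "previous authors haven't done it like this" is not, by itself, a refutation.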
That leaves only one specific critique. This is taken from the aforementioned Twitter conversation I had with the expert: that the method "violates the independence assumption of PCA". This is a vague allusion to theory, but is neither understandable to the reader nor easy to rebut.
If I were less charitable, I might suggest that the authors are using technical-sounding jargon to convey authority, without any deep or substantial critique of the method.
So the critique of this paper is: 1) this is different; 2) some bloke who knows what he is talking about said it is bad (but to remind you, he didn't); and 3) err, theory. This is poor scholarship.
Again, the authors want you to trust their authority. No arguments are made here.
Finally, let's go back to the definition of who is allowed to comment on statistics or vet statistical methods. Clearly, their including my work as a case study suggests that they think I am not qualified.
Certainly, by their definitions I am not. I am not affiliated with a statistics department and I don't have an MSc in statistics.
However, I do have an undergraduate degree in mathematics. And if I was setting out the criteria for expert status that would be first on my list (see how tricky this is).
Please remember, I am not claiming any authority based on my qualifications...
In contrast, many of the authors of this paper have undergraduate degrees in sports science. I'm probably biased, but I would say that an undergraduate degree in maths trumps a master's degree in statistics. 😈
I also have a PhD in computational bioengineering and I teach an MSc module in statistics. Why are the authors denying me a valid voice in a statistics/mathematics debate?
And so we get back to the problem of authority. Science doesn't work if some people have higher status than others. For this reason, this paper is deeply dangerous.

END.
You can follow @dr_jump_uk.