A few takeaways on preprints and peer review from a great discussion group at the @turinginst, hosted by @f_nanni @makethecatwise. Thanks for the invitation, guys! 1/n
(a) when we preprint to get feedback, and then significantly improve the paper (which is the point of getting feedback) - what are the chances that anybody who read the first version will also look at the last one? By that time they'll be drowning in new papers. /2
(b) corollary of (a): scripts that automatically update arXiv citations to the official ACL Anthology BibTeX might mean we're not citing what we think we're citing. From personal experience: two of my preprints this year will be over 50% different when published. /3
(c) providing peer review is not only invisible free labor, but also an ethical minefield: what if you've been working on something similar, but after reviewing you can no longer publish it because it'll look like you stole the idea? (thanks @ducha_aiki) /4
(d) in different fields preprints serve completely different functions (apparently in some fields a preprint can get you desk-rejected or scooped!) I'd normally think that the more people think about a common problem the better, but it sounds like a one-size-fits-all solution is unlikely. /5
(e) there's a TON of projects trying to come up with alternative peer review and publication schemes, most of which I had never heard of. There's a big listing here: https://reimaginereview.asapbio.org/ Thanks @jessicapolka! /6