Some thoughts on the Statcheck project
Yesterday, a piece in Retractionwatch covered a new study in which the results of automated statistics checks on 50,000 psychology papers are to be made public on the PubPeer website. I had advance warning, because a study of mine had been included in what was presumably a dry run, and this led to me receiving an email on 26th August as follows:

Assuming someone had a critical comment on this paper, I duly clicked on the link, and had a moment of double-take when I read the comment. Now, this seemed like overkill to me, and I posted a rather grumpy tweet about it.

There was a bit of to and fro on Twitter with Chris Hartgerink, one of the researchers on the Statcheck project, and with the folks at PubPeer, where I explained why I was grumpy and they defended their approach. As far as I was concerned it was not a big deal, and if nobody else found this odd, I was prepared to let it go. But then a couple of journalists got interested, and I sent them some more detailed thoughts. I was quoted...