There seem to be a lot of misunderstandings regarding Amazon Mechanical Turk @amazonmturk that keep appearing here on Twitter. Here is my attempt at clearing some of them up. A thread. 1/n
"Mturk is unethical because it underpays workers". This is not true. You as an experimenter decide how much to pay workers; Mturk simply passes on whatever experimenters choose to pay. But yes, the platform could do more to actively encourage higher payment. 2/n
"Still, Mturk should be more expensive, and they should provide benefits to their workers". The vast majority of US workers do this for fun or to make extra money, not to make a living. And as experimenters, you are often only allowed to compensate participants for their time. 3/n
"Workers on Mturk are not representative of the general population". True, workers tend to be younger, more highly educated, whiter, and more female than the general population. But they are likely far more representative than your tiny sample of Ivy League undergraduates. 4/n
"There is no way of knowing whether these people really are who they say they are". While Mturk is very strict with registrations (e.g. it requires a social security #), it's difficult to know for sure who takes part. But do you ID your participants every time they come to the lab? 5/n
"Data quality at Mturk is really bad because of bots." There is no evidence there were ever bots on Mturk. Unless you use a highly standardized questionnaire tool, it's too difficult or costly to adapt a bot to your experiment. Mturk also includes regular CAPTCHAs. 6/n
When the whole bot story started in a Facebook group, I was concerned and even spoke to the guy who started spreading the rumor, and he showed me his evidence. I told him it was actually evidence of low-quality human responses, but by then the rumor had already taken off. 7/n
"Mturk data quality used to be good but now it's really bad." True, some people try to cheat. But the only study showing a drop in performance was based on post-hoc selection bias (they saw a weird effect and turned it into a paper). Our analyses showed no effect across time. 8/n
"Data quality on platform XY is much better than on Mturk." The only studies comparing Mturk with other platforms have found no difference between platforms, or, if anything, better results on Mturk. But there is a lack of recent comparisons. 9/n
"Mturk fees are 40% of what workers are paid, which is much more than other platforms." True, if you don't know Mturk well, you may end up paying that 40%. But with even a little knowledge you pay 20%, and with certain tricks as little as 14%. 10/n
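To make the fee arithmetic concrete, here is a minimal sketch based on the commonly cited Mturk fee schedule (assumption: a 20% base commission, plus an extra 20% surcharge on HITs with 10 or more assignments — hence the 40% figure, and hence the common workaround of splitting a study into batches of 9). The thread's 14% trick is not spelled out here, so it is not modeled.

```python
def mturk_fee_rate(num_assignments: int) -> float:
    """Platform fee as a fraction of worker pay.

    Assumes the commonly cited schedule: 20% base commission,
    plus 20% extra for HITs with 10 or more assignments.
    """
    base = 0.20
    large_batch_surcharge = 0.20 if num_assignments >= 10 else 0.0
    return base + large_batch_surcharge

def total_cost(pay_per_assignment: float, num_assignments: int) -> float:
    """Worker pay plus platform fee for one HIT."""
    fee = mturk_fee_rate(num_assignments)
    return pay_per_assignment * num_assignments * (1 + fee)

# 100 participants in a single HIT -> 40% fee
# the same 100 participants split into batches of 9 -> 20% fee per batch
```

Under these assumptions, paying 100 workers $1 each in one HIT costs $140 in total, while twelve batches of at most 9 assignments cost $120 for the same worker pay.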
But Mturk isn't perfect: the interface is no good for coded studies, debugging sucks, the sandbox isn't similar enough to the real thing, Mturk might change your code without notice, flagging bad workers for everyone is still not implemented, and billing is suboptimal. 11/n
I'll add more to this thread as things come to mind, but feel free to comment. Don't @ me about the missing references; I'm happy to provide them if you're interested.
TL;DR: Crowdsourcing is a great tool that revolutionized psychology experiments. And Mturk is a great resource for it. FIN
You can follow @martin_hebart.