currently running a 300-person study on Prolific, which involved a screening survey (with an attention check) and filtered for workers with a 98% approval rate or higher. how is it going, you ask???
so far, from manually reviewing the responses twice a day, every day, I've flagged 22 participant IDs that all link back to around 7-10 people, each with at least two (if not more) accounts. One person has AT LEAST SIX different Prolific accounts.
and these are just the people I can catch, because they're submitting duplicate images (not even bothering to change the file name) as part of the task.
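for anyone curious, here's a minimal sketch of the kind of duplicate-filename check I'm doing by hand. it assumes a hypothetical CSV export with "participant_id" and "image_filename" columns, so treat it as an illustration rather than a ready-made tool:

```python
# illustrative sketch only: flag image filenames submitted by more than
# one participant ID, assuming a hypothetical export "responses.csv"
# with "participant_id" and "image_filename" columns
import csv
from collections import defaultdict

def flag_duplicate_uploads(csv_path):
    """Return filenames that were uploaded under multiple participant IDs."""
    uploads = defaultdict(set)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            # normalize the filename so "IMG_001.jpg" and "img_001.JPG" match
            uploads[row["image_filename"].strip().lower()].add(row["participant_id"])
    # a filename shared across IDs suggests one person running several accounts
    return {name: ids for name, ids in uploads.items() if len(ids) > 1}

if __name__ == "__main__":
    for name, ids in flag_duplicate_uploads("responses.csv").items():
        print(name, sorted(ids))
```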
when I catch them, I reject or remove them from the study. rejecting on Prolific involves typing an individual email to each person you reject and then dealing with MULTIPLE messages disputing the rejection.
I have read thousands of open responses from Prolific workers. I know a lot of them are people trying to make ends meet, and this is a great way to do that. but if we can't get good data, academics will leave Prolific like they left MTurk, and workers will be left high and dry.
this is a collective action problem: the effort of screening responses and arguing over rejections means most people running studies just approve everyone and quietly drop the bad responses, rendering the approval rating useless.
there must be a better way to make sure that 1. our online samples are who they say they are, 2. workers get rewarded for good work, and 3. we can continue to use these shared research pools. if you have ideas or better platforms, I'd love to hear them. /fin