Oh for the love of Cthulhu, I had plans for this morning, but apparently we're doing this now.

Gather round children, for a thread on deeply irresponsible "activism". https://twitter.com/CatrinNye/status/1194159004689281024
So, these #deepfakes are made by @FutureAdvocacy, a "think tank" aiming to "promote responsible AI policy".

You can watch them in the thread from the quoted tweet above. And they are in and of themselves a danger to democracy.
Who am I to be telling you this? I'm the person who's watched a substantial chunk of the existing porn #deepfakes on the internet so you don't have to. (Paper forthcoming in Porn Studies, like next month.)
Now, if you watch @FutureAdvocacy's #deepfakes, you might note several features in the material:

- They start as straight-up videos of Corbyn and Johnson endorsing each other for PM.
- They then move on to saying they are #deepfakes.
- It's only at that point that a watermark with @FutureAdvocacy's logo appears in the top right-hand corner.
- Both Corbyn and Johnson use a bunch of their respective catchphrases, strung together into word salad, which is presumably what's meant to clue us in to the fakeness early on in the video.
Except: Boris Johnson is our Prime Minister, and *he always sounds like that*. He is beyond parody, making the rest of politics beyond parody, and your Boomer mum sharing random videos on Facebook definitely can't tell the difference.
Because there is no visual indication that this is in fact a fake in the portion of the video where they endorse each other, it is *extremely* easy to pick up these clips, cut them down to just that portion, and release them to the Boomer Facebook hellscape to do their worst.
And of course, even once @FutureAdvocacy's logo appears in the corner, who the heck knows who @FutureAdvocacy are? Their name is so generic, this might as well be some kind of legit political ad.
Now let me tell you why @FutureAdvocacy should have watched more porn #deepfakes before they pulled this stunt. ( @FutureAdvocacy, hmu for my consultancy rates.)
Porn #deepfakes communities are *extremely* careful about indicating that the material is fake. These people don't want anyone to stumble across their videos and think they're real.
They make sure of this in a whole bunch of different ways, depending on the platforms they're using for interaction and distribution of #deepfakes.
Video metadata (titles, tags, etc.) prominently features words like "fake" and "NOT [celebrity name]".
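(Nerd aside: this is cheap to do. A minimal sketch of that kind of metadata tagging, assuming Python and ffmpeg are available; the filenames and wording are mine, not any community's actual upload tooling:)

```python
# Minimal sketch: rewrite a clip's container metadata to flag it as fake.
# Filenames and wording are hypothetical; assumes ffmpeg is on PATH.
import subprocess

def tag_as_fake(src: str, dst: str, celebrity: str) -> None:
    """Copy the streams untouched, rewriting title/comment metadata."""
    subprocess.run(
        [
            "ffmpeg", "-i", src,
            "-metadata", f"title=FAKE - NOT {celebrity}",
            "-metadata", "comment=deepfake / parody, not a real recording",
            "-codec", "copy",  # no re-encode, metadata only
            dst,
        ],
        check=True,
    )

tag_as_fake("clip.mp4", "clip_tagged.mp4", "Some Celebrity")
```

A dozen lines, and the file announces itself as fake to every platform and player that reads its metadata.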
Most importantly, and the thing that @FutureAdvocacy should have paid attention to, a substantial proportion of porn #deepfakes have watermarks *throughout the entire video* marking them as fake.
"FAKE" stamps, the URL of one of the popular sites, etc. generally positioned in a way that it would be hard to crop or otherwise remove them without losing what's interesting about the video.
With these porn #deepfakes, there's no mistaking them for real, or editing them to pass them off as real. @FutureAdvocacy's videos? Yeah, not so much.
So congrats @FutureAdvocacy, either you fucked up or this was your goal from the start, but either way, "responsible AI policy" this ain't.
This thread is doing some numbers. See here for how you can support my work: https://twitter.com/elmyra/status/1163771121080184833