ok but the actual story is much more interesting imo

it appears this was "produced" by an AI, but it also exists verbatim elsewhere as written by a human, so it was probably picked up in the training data and then spat back out unaltered https://twitter.com/JimothyBurg1ary/status/1297594069351370752
this is a serious problem* with "AI" used to "generate" "content" (by throwing stuff at a neural network and then asking it to spit something back out) and i don't really see anyone talk about it

* i don't know how serious it is but let's pretend it's dire for theatrical reasons
You See, neural networks are not AI in any useful sense. they're statistical models. when you train a neural network (especially naïvely), it breaks the input data into little pieces and remembers which kinds of pieces were next to each other
(disclaimer: i haven't worked directly with this technology so this is all a casual gleaning)

that works decently well for, say, photos of humans. people tend to look at cameras so you have a lot of images with the same framing, with eyebrows next to eyeballs next to noses
if you cram that data into a neural network and ask it to generate a face, it has a decent chance of picking an arbitrary eyebrow, eyeball, nose, etc. and inventing an original person. though it's just playing mr potato head
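the mr potato head version of "generation", as a toy sketch (every name and face part here is made up for illustration; real image models are obviously fancier, but the recombination idea is the same):

```python
import random

# toy "training set": each face decomposed into the same three parts
faces = [
    {"eyebrows": "bushy",  "eyes": "green", "nose": "button"},
    {"eyebrows": "thin",   "eyes": "brown", "nose": "roman"},
    {"eyebrows": "arched", "eyes": "blue",  "nose": "snub"},
]

def potato_head(faces):
    """'invent' a face by picking each part independently from the pool"""
    return {part: random.choice([f[part] for f in faces]) for part in faces[0]}

print(potato_head(faces))  # an "original" person, assembled from seen parts
```

note that every part of the output existed in the input; only the combination is new, and sometimes not even that.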
but we look at this and we see a mysterious computer program "making" something and we ascribe all kinds of human impulses to it, because we are children and like to play make-believe with inanimate objects
for example, if you ask a human to /invent/ something, they will (probably) make at least an attempt to not respond with something that they know for a fact already exists

computers don't care. in fact they already forgot the data, they just remember the pieces
so if it decides to start with A, and it's only ever seen A next to B, and it's only ever seen B next to C, and it's only ever seen C next to D... then you might get out ABCDEFG and go "wow amazing it came up with the alphabet all by itself"
but no, it didn't do that. it saw someone else do that and couldn't figure out anything else to do. it's not a sign of intelligence; it's exactly the opposite
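the ABCD thing above, as a toy bigram markov chain (made-up corpus and code, just to show the mechanism, not any real generator): a common word has several recorded successors, so output can recombine; a word the model has only seen once has exactly one recorded path, so "generation" is a verbatim replay

```python
import random
from collections import defaultdict

def train_bigrams(sentences):
    """remember which word followed which word; forget everything else"""
    model = defaultdict(list)
    for s in sentences:
        words = s.split()
        for a, b in zip(words, words[1:]):
            model[a].append(b)
    return model

def generate(model, start, max_len=20):
    """walk the chain, picking a recorded successor at random each step"""
    out = [start]
    while out[-1] in model and len(out) < max_len:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "colorless green ideas sleep furiously",  # these words appear nowhere else
]
model = train_bigrams(corpus)

# common starting word: multiple successors recorded, output can recombine
print(generate(model, "the"))

# rare starting word: only one path exists, so the chain can only
# replay the original sentence word for word
print(generate(model, "colorless"))  # -> colorless green ideas sleep furiously
```

this is also exactly the irc-bot failure mode: start with a rare word and the "poignant" output is just the one human sentence it came from.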

and the messier your training data is, the fewer similarities within it, the more likely this is to happen
and if you're looking for unstructured, chaotic, heterogeneous, unpredictable data

well i can't think of anything better than /internet jokes/
what is a neural network going to do with that joke? pair it with another quip about turning off a fan? how many of those do you think it started with?
a particularly big problem here, as mentioned by @jseakle as well, is that this launders ownership

who wrote the joke i QTed? as far as the poster knows, /nobody/. it just came out of a computer. but it came out of a computer that had seen a human being say it elsewhere
which brings me to: This Fursona Does Not Exist, a website that produces an endless assortment of furry avatars "invented" by an "AI" https://thisfursonadoesnotexist.com/ 

but this has the same problem on steroids. the input data isn't photos of humans, who are relatively similar; it's art!
it's not art by a single artist. it's not art in the same style. it's not art of the same subject. it's a massive, scattered assortment of artwork of different species, in different styles, in different poses, using different color schemes
so for any given avatar this thing "generates", what are the odds that it went

"i think i'll start with this bit. hm, i've only seen that once before, so i only know how to continue it in one way"

and produced a near-exact duplicate of one of the input avatars?
but this thing is billed as "AI-generated". so how many furries are now running around with avatars from this website, confident that "nobody" made them, that they're completely original?
and meanwhile we have clowns who should know better, predicting entirely AI-generated games inside of N years. this is snake oil.

making software that's /designed/ to produce original output (and actually /does so/) is fucking hard and involves a ton of human input
yeah you can get an essay out of that one text generator. cool. but do keep in mind
1. it had //millions// of pages of input from formal sources like newspapers and encyclopedias, all written to a similar style
2. how much of its output is truly original? have you checked?
the funny thing is that we already saw a lot of this stuff happen 20+ years ago with markov chain bots in irc. sometimes the bot would say something remarkably poignant and, oh, no, it's basically repeating something a human said earlier because it started with a rare word
but i think i've mentioned that before so i guess i'm no better than an AI
oh yes this is a thing that grates on me. i believe there are multiple cases of training an "AI" on a bunch of stuff scraped from the web, then //selling the result//? like excuse me https://twitter.com/0x2ba22e11/status/1298939991339532289
anyway neural networks can do some clever stuff, but an awful lot of it boils down to a combination of
1. we really want to believe computers are magic geniuses rather than idiot rocks
2. humans see patterns in noise
3. math party tricks like 1089 (ask wikipedia)
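(the 1089 trick, for the curious, sketched in python: take any 3-digit number whose first and last digits differ, subtract its reversal, add the reversal of that difference, and you always land on 1089. looks like mind-reading; it's just arithmetic)

```python
def trick_1089(n):
    """n: a 3-digit number whose first and last digits differ"""
    rev = int(str(n)[::-1])                       # reverse the digits
    diff = abs(n - rev)                           # always a multiple of 99
    return diff + int(str(diff).zfill(3)[::-1])   # pad 99 -> "099" first

print(trick_1089(742))  # -> 1089
```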
another disclaimer: my rough description of how NNs work here is really more like how markov chains work; much of the point of NNs is that they can do much more sophisticated kinds of pattern-matching, but it is still pattern-matching
You can follow @eevee.