I haven't done anything particularly novel with GPT-3 yet, but it's quite interesting to see it in action as a simple way of querying a topic. I asked it to write me a bio, which is ultimately false, but false in ways that suggest some very clever inferences.
In the bio it roughly gets my area of work right, and my "band" (its name borrowed from an album by the band WIRE) is made up of "band members" one might commonly find covered in WIRE magazine.
I asked it a simple question about techno, and different phrasings turned up very different results.
In both cases the structure of the response is pretty competitive with the kind of argument one might expect from an undergrad term paper responding to a rather boring question. Mightily impressive, and it also hints at an important takeaway from this kind of work.
Namely, that tools like this aren't going to replace journalists or writers worth their salt. If anything, the prevalence of tools like this challenges us to value great writing, great questions, and great responses much more.
The API rightly urges beta testers to account for the bias present within the system, and it even flags 'toxic' responses, which is a nice touch.
And at first glance it occurs to me that it will be an interesting challenge to weight training on some articles over others, lest the responses reflect 100 false articles over the 1 really well-researched article, for example. So there is a future for editors too :)
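To make the idea concrete, here is a minimal sketch of one way an editor's judgment could feed into training-data selection. This is purely hypothetical (it is not how OpenAI trains GPT-3): it assumes each article carries an editorial `quality` score, and samples articles for training in proportion to that score, so one well-researched piece can outweigh many false ones.

```python
import random

# Hypothetical article pool with editorial quality scores (assumed values,
# not drawn from any real dataset). A single well-researched article is
# scored far above many low-quality ones.
articles = [
    {"title": "well-researched piece", "quality": 10.0},
    {"title": "false article 1", "quality": 0.1},
    {"title": "false article 2", "quality": 0.1},
]

def sample_article(pool, rng=random):
    """Draw one training article, weighted by its quality score."""
    weights = [a["quality"] for a in pool]
    return rng.choices(pool, weights=weights, k=1)[0]
```

With these scores, roughly 98% of sampled training examples would come from the well-researched piece, even though it is outnumbered 2 to 1 in the pool.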
I asked it to give me a bio of Holly, and again you can see some interesting inferences. 'Audentity' sounds like the title someone joking about Holly would give one of her albums. A lot of potential with this thing for making jokes, actually.
It reminds me of the old data plays I used to make, where I would write screenplays based on small bits of personal information I knew about the audience. This is golden for that kind of creative work; like joke writing, it is great at building logical scenarios from a punchline source.
So in a way my first experiences playing with it confirm another point: tools like this will be great as writing assistants, or as ways to quickly audition different potential scenarios around a human-made theme/prompt.
Even when given a random prompt, it is really impressive how it readjusts to frame things as a convincing review. This one was a favorite; although it drifts off in context, the first line is voiced in a way you would absolutely see in an article.
With other prompts, it does a great job of forming a plot and adding suspense. It gets gory pretty fast; this response was labelled 'unsafe' and 'toxic' pretty quickly by the system (although it seems pretty benign).
In line with @kaleidic's thoughts about this making it harder for teachers to grade papers, I fed it a paragraph from @RadxChange's "Data Freedom Act". What follows, with a few edits (the car stuff), would make for a convincing student paper on the topic. This is a big deal!
I ran some of the sentences through Google to see if the phrasing had been lifted rote from anywhere, and it had not. Again, a clear call to ensure teachers are asking more conceptually complicated questions :)
Next test: I fed it a few paragraphs from my Protocols talk to see what it came up with. Really impressive; even though the arguments made don't represent my opinions, they are clearly voiced in a style close to mine, and relevant to the topic.

https://medium.com/@matdryhurst/protocols-duty-despair-and-decentralisation-transcript-69acac62c8ea
More fuel for the argument that this might turn out to be a really impressive writer's tool. It can ingest large passages of text and draw convincing, if simplistic, conclusions based on contextual and stylistic factors. Blown away, tbh.
I fed it two paragraphs from a different article I wrote. The summary responses sound convincing enough. GPT-3 also invented an academic at the University of Southampton :)