Who the hell thought that GPT-3 could be used in a healthcare setting, and, more importantly, what about the way we talk about language models gives people that idea? https://twitter.com/futurism/status/1321191940784836609
This article is poorly sourced, though. I can't tell if the people running the "experiment" were doing it to demonstrate that language models aren't all that (duh), or if they genuinely thought it could be used that way.
Here's the original post from http://nabla.ai, and it's a little hard to tell what they expected to find: https://www.nabla.com/blog/gpt-3/
Among other things, they say "In practice, this means the model [GPT-3] can successfully understand the task to perform with only a handful of initial examples."

Spoiler alert: it's not understanding anything.
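To be concrete about what "a handful of initial examples" actually means: few-shot prompting is just pasting demonstration pairs in front of a new input and asking the model to continue the string. Here's a minimal sketch of that; the `complete` function is a hypothetical stand-in for whatever text-completion API you'd call, and the toy triage exchanges are my own illustration, not Nabla's prompts.

```python
# Few-shot prompting is string concatenation, nothing more.
# `complete` below is a hypothetical placeholder for a text-completion
# API call -- it is NOT OpenAI's or Nabla's actual code.

def build_few_shot_prompt(examples, new_input):
    """Concatenate demonstration pairs plus the new query into one string."""
    parts = [f"Patient: {q}\nAssistant: {a}" for q, a in examples]
    parts.append(f"Patient: {new_input}\nAssistant:")
    return "\n\n".join(parts)

# Toy demonstrations (illustrative only).
examples = [
    ("I have a headache.", "How long have you had it?"),
    ("I feel dizzy when I stand up.", "Have you been drinking enough water?"),
]

prompt = build_few_shot_prompt(examples, "I've been feeling really unwell.")

# The model's only job is to guess plausible next tokens for `prompt`.
# There is no task representation, no medical knowledge base, and no
# notion of consequences -- just text statistically likely to follow.
# completion = complete(prompt)  # hypothetical API call
print(prompt)
```

The point: the model never receives a "task," only a string. Whatever comes back is whatever text it deems likely to follow, which is exactly why this shouldn't go anywhere near patients.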
Also LOL @ "the whole Internet, from Wikipedia to the New York Times". Uh guys, there's a whole lot more to the internet...
"Also,there is no doubt that language models in general will be improving at a fast pace, with a positive impact not only on the use cases described above but also on other important problems, such as information structuring and normalisation or automatic consultation summaries."
So yeah, they aren't approaching this from "Of course no language model could ever be used in this way" but rather testing to see if GPT-3 is "there yet" along some imagined trajectory.
They do note OpenAI's warnings not to use GPT-3 for sensitive, life-and-death scenarios like health care, but they don't seem to have come away from the OpenAI docs with anything like a realistic understanding of what the tech actually is.