Two possibilities: GPT-3 is very close to some kind of general intelligence, OR we are very easily tricked by the epiphenomena of thinking.

(...)
My prejudice is the latter, although @MelMitchell1’s copycat experiments gave me significant pause. Those appear to be cases where the machine is developing concepts on the fly.
GPT-3 does push, hard, at the signals we use to determine whether or not someone else is thinking. It reveals the extent to which a great deal of life is, indeed, NPC. (I can’t recall whose Twitter feed gave me that insight, sorry.)
I’m with @add_hawk that it may absolutely kill the college essay. Indeed, it may be an amazing demonstration of the case for speech over writing first made in the Phaedrus.
But (surprise!) I find that GPT-3 gives me greater faith in my own understanding of the nature of intelligence. This is despite (or, rather, because of) the fact that GPT-3 is not, in fact, intelligent.
I’m surprised and rather shocked by this, for two reasons: (1) I always thought my conception of intelligence was a bit too edgy to be correct; (2) I didn’t expect the new supporting evidence to be so clear.
Intelligence is the ability to reflect on one’s concepts—not just to form memories (for example), but to reflect on them, and to form memories of that reflection in turn.
This makes philosophy the purest form of intelligence, uncorrupted by pattern recognition (I was pleased to see that this idea plays a key role in @NegarestaniReza’s new book).
GPT-3 cannot reflect upon its concepts—it cannot (riffing on Brandom’s views) be answerable to its concepts under the sign of reason.
What GPT-3 shows is that an enormous amount of what seems to be intelligent behavior is, in fact, sensitive use of pattern matching.

Some people want to flip this and say this means pattern-matching is intelligence. That seems wrong, however...
Consider the generative models behind http://thispersondoesnotexist.com. It is shocking that they can produce “real” faces and facial expressions—but we don’t thereby start thinking that a friend’s smile isn’t real.
Indeed, precisely because the underlying mechanism is so different (manipulating pixels vs. muscles), we think about visual GANs as accounts of how we perceive, not of who we are.
A college student’s essay can be a good signal of intelligence when coupled with knowledge of the writer’s limitations—there are certain things that are really hard to write unless either (1) you reflect on your concepts or (2) you have insane pattern recognition over a massive database.
It is, in a way, something like a pornography of thought—infinite amounts of “provocative” text behind which, one can imagine, lies a mind.
By the by, this is also why IQ is not a meaningful test of intelligence—it tests pattern recognition (as in Raven’s Progressive Matrices) at one or two layers of abstraction.
IQ may work in certain demographics simply because someone who spends time thinking intelligently (i.e., reflexively, about their thinking) will learn certain patterns; that’s a bit like discovering someone likes to read novels by testing their vocabulary.
In any case, each example I get from GPT-3 kind of wrecks me, because something that (we all know) can’t reflect on its concepts shouldn’t be able to write something as good as... oh dear, that.
If you want to push this a little further, you might say that we are intelligent because of our limits, not despite them. This fits (I think) with how we rely not just on pattern recognition but on recursion—our notions of explanation (for example) are themselves explanatory.
We are constantly using our concepts to make new concepts, something that requires reflection—“metaprogramming” in a free and untyped, level-mixing sense.
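To make that level-mixing concrete, here is a toy Python sketch of my own (an illustration, not a model of mind, and nothing from the sources in this thread): concepts as first-class values that can be formed from, and applied to, other concepts, including the concept-formers themselves.

```python
# A toy illustration (not a model of mind): "concepts" here are just
# Python functions, which can be built out of other concepts and even
# applied to the concept-formers themselves, since nothing enforces a
# separation of levels.

def negate(concept):
    """Form a new concept by reflecting on an existing one."""
    return lambda x: not concept(x)

def is_even(n):
    return n % 2 == 0

is_odd = negate(is_even)   # a concept made from a concept
print(is_odd(3))           # True

# Level mixing: the concept-former applied to itself. Python happily
# treats `negate` as an object at the same level as its own inputs.
doubter = negate(negate)
print(doubter(is_even))    # False: negate(is_even) is a (truthy) function
```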
I don’t think AGI is impossible by any means. But until GPT has some way to reflect on—talk about—its internal states, in a way that makes that reflection another internal state, I think something is missing.
One way to do it is to plug some kind of dimensionality reduction algorithm on top of its internal states, and make that thing output symbols as well; symbols that go on the stack, and alter the machine’s subsequent states in the same way the word outputs do.
Imagine (e.g.) running an autoencoder on the internal states, and feeding the bottleneck layer in where the words go. That sounds very hard. (Although I’d be very curious to see what a dimensionally reduced internal state space looked like for GPT-3.)
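Here is a minimal sketch of how that wiring might look, assuming a toy transformer rather than GPT-3 itself. Everything here (the IntrospectiveLM wrapper, the reserved token ids, the argmax “symbolization” of the bottleneck) is my invention: one crude way, among many, to make reflection re-enter the stream like a word.

```python
# Sketch only: a tiny LM whose own compressed internal state re-enters
# its context as an extra symbol. All names are hypothetical.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, d_model: int, d_bottleneck: int):
        super().__init__()
        self.enc = nn.Linear(d_model, d_bottleneck)
        self.dec = nn.Linear(d_bottleneck, d_model)

    def forward(self, h):
        z = torch.tanh(self.enc(h))   # the dimensionally reduced state
        return z, self.dec(z)         # code and reconstruction

class IntrospectiveLM(nn.Module):
    def __init__(self, vocab: int, d_model: int = 64, d_bottleneck: int = 8):
        super().__init__()
        # reserve d_bottleneck extra ids to serve as introspection tokens
        self.embed = nn.Embedding(vocab + d_bottleneck, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.core = nn.TransformerEncoder(layer, num_layers=2)
        self.ae = Autoencoder(d_model, d_bottleneck)
        self.head = nn.Linear(d_model, vocab)
        self.vocab = vocab

    def forward(self, tokens):
        h = self.core(self.embed(tokens))       # internal states
        z, _ = self.ae(h[:, -1])                # compress the last state
        # crude "symbolization": the strongest bottleneck unit becomes a
        # token id, appended to the stream like any other word
        introspect = self.vocab + z.argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, introspect], dim=1)
        h = self.core(self.embed(tokens))       # reflection alters the state
        return self.head(h[:, -1])              # next-word logits

model = IntrospectiveLM(vocab=100)
logits = model(torch.randint(0, 100, (1, 10)))  # one 10-token prompt
print(logits.shape)                             # torch.Size([1, 100])
```

Note that the argmax makes the introspection symbol non-differentiable; a serious attempt would need something like a straight-through estimator or a VQ-style codebook, which is part of why this sounds very hard.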
Even then that might not be enough. It’s not enough to have memory, or even for your words to be influenced by your “thoughts”—rather, your words and thoughts have to be in the right relationship. Your words have to be answerable to your thoughts, but what that means isn’t clear.
I’m not sure this relationship can be achieved simply by training on text. That’s because we learn to make our words answerable to ideas by others asking us to do so. Teachers, not books (or tests about books), make us smart.
That doesn’t mean that AI isn’t potentially very dangerous. Indeed, if it does kill the college essay, it will have ruined an entire method of teaching that was far more cost-effective than the Socratic method and wasn’t, honestly, too bad.
It is easy to imagine GPT-like systems taking down entire internet communities—imagine hooking this up to a Reddit self-help forum, for example, where people write encouraging essays to each other as a way to connect.
On the flip side, if we do take GPT-3’s lessons to heart, it might get us beyond the epiphenomena of thinking, and make us more attentive to ourselves and our own minds.
In the same way we didn’t have a widespread notion of authenticity before we had a crisis of authenticity (driven by consumer culture), perhaps we (as a society) will up our game on talking about intelligence.
We might learn to value learning and adjustment more than rhetoric and winning in arguments; to value self-awareness and reflection more than facility.
I love this idea. It reminds me a little of using Eliza for actual psychotherapy. https://twitter.com/edsonedge/status/1289388651873308673?s=20
This is the Chater _Mind is Flat_ story—which I’m not sure I buy! https://twitter.com/USyd_Complexity/status/1289387546854580224?s=20
Yes—something long overdue. The value of an intelligence (rather than a pattern matcher) is high, but irregular. https://twitter.com/Mr_Completely/status/1289388639986708481?s=20
My current nightmare application is blending GPT-3, facial GANs, and voice synthesis to make synthetic Elizas that drive vulnerable people (literally) crazy.
What prof has not encountered a bullshit artist like this in seminar? Everyone knows you were on your phone and didn’t do the reading, dude. https://twitter.com/chazfirestone/status/1289395574962233351?s=20
So @RokoMijicUK should be credited with the NPC characterization (to be clear, I disagree with what I think is her implicit claim; to be indistinguishable from an NPC at some point in time does not mean you are “not real”).