It’s not true that there’s nothing new under the sun; there are many new things, it’s just that the old, human things are still with us, and always will be
You can’t really understand things until they are part of your habitual life, and you only assimilate novelty by finding analogies to things that you already know
So new technologies really do give us new perceptions, and enable us to add new understandings to our conceptual framework of the world
When a new technology becomes ubiquitous, we all become philosophers, because what used to be a rarefied abstraction is now a concrete daily occurrence
Lately, we have been discussing a computer program, which is itself a new and incredible thing, a thinking lightning rock, and it can seemingly apprehend language, and speak better than many real people. https://twitter.com/raphamilliere/status/1289129723310886912
Set aside the obvious criticisms and skepticisms of the thing and notice, instead, that philosophy of mind is now close at hand, that our intuitions about these topics are now tested by a real life encounter with a thought experiment https://twitter.com/RokoMijicUK/status/1289002152594214913
The minimalist position on GPT-3 is that it is “just splines for text”. A spline is a continuous path between points in geometric space, usually curved. By analogy, GPT produces continuous(ish) paths between points in linguistic space
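The analogy can be made concrete with a toy sketch. This is purely illustrative, not how GPT-3 works internally: a spline traces a smooth path between known points, and the claim is that a language model does something similar between points in a learned linguistic space.

```python
# A minimal sketch of the "splines for text" analogy (illustrative only).
# A spline interpolates a continuous path through known points; by analogy,
# a language model traces continuous(ish) paths through linguistic space.

def lerp(p, q, t):
    """Linearly interpolate between two points p and q, for 0 <= t <= 1."""
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def spline_path(points, steps_per_segment=4):
    """A piecewise-linear 'spline' through a list of points in space."""
    path = []
    for p, q in zip(points, points[1:]):
        for i in range(steps_per_segment):
            path.append(lerp(p, q, i / steps_per_segment))
    path.append(points[-1])  # end exactly on the final point
    return path

# Three "points" in a toy 2-D space; the path passes continuously through them.
path = spline_path([(0.0, 0.0), (1.0, 2.0), (3.0, 3.0)])
print(path[0], path[-1])  # begins at the first point, ends at the last
```

A real spline would use curved (e.g. cubic) segments, and GPT's "points" live in a very high-dimensional embedding space, but the shape of the claim is the same: continuous paths between things already seen.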
The question is whether our own faculty of language is much more than this. It is in one sense, obviously, because our linguistic faculty is tangled and coupled with all of our other senses https://twitter.com/0x49fa98/status/1063560554365603840
All theories of consciousness are vague and bad, but the least vague one is called integrated information theory, and it claims that consciousness may be “just” splines for multiple overlapping sensory paradigms. Intuition is quiet, here
The obvious objection, I think, is grounded in an impoverished view of our senses. We have many more than five; we have our emotions, and our proprioceptions, and pleasure and pain and temperature, sense of direction, and so on
And if I imagine an algorithm that could do for all of my senses what GPT does with only a single sense; if I imagine further that it continuously consumed its own output as one more “input” sense, then I feel there might be very little left to explain
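The feedback loop imagined above can be sketched in a few lines. The "model" here is a hypothetical toy stand-in (a one-step lookup table), not GPT-3; the point is only the structure of the loop, in which each output becomes the newest input.

```python
# Hypothetical sketch of an algorithm that continuously consumes its own
# output as one more "input" sense. toy_model is a stand-in, not GPT-3.

def toy_model(context):
    """Map a context of tokens to a next token; a placeholder for a real model."""
    transitions = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return transitions.get(context[-1], "the")

def autoregressive_loop(seed, n_steps):
    """Repeatedly feed the model's own output back in as its newest input."""
    context = [seed]
    for _ in range(n_steps):
        context.append(toy_model(tuple(context)))
    return context

print(autoregressive_loop("the", 4))
```

This is exactly how autoregressive generation already works for the single sense of text; the thought experiment simply widens the loop to many senses at once.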
Of course, for some of the senses that I mention, such as emotion, it is not clear how to implement that. Computer science has basically solved the problems of perception; what is lacking is a theory of emotion and desire https://twitter.com/0x49fa98/status/1203432100034039809
But I want to leave that for a moment and return to the question of how machine learning brings philosophy of mind into the practical realm.
It feels obvious to us that GPT-3 doesn't see itself as anything, doesn't see itself at all, because the moments where it does "see itself" are discontinuous, leaving no trace once they have passed https://twitter.com/egg_report/status/1162375857991835652
It may not be horrifying, but it is certainly alien, to imagine small or discontinuous forms of consciousness. We can imagine GPT-3 "perceiving" in tiny fits and starts of awakeness, similar perhaps to the subjective experience of a nematode worm https://twitter.com/0x49fa98/status/1157307742014435330
Even if we subscribe to spiritual views regarding the mind and the soul, we must admit that a dog or a cat is conscious, but then also a mouse, and also a lizard, and a bee, and an ant. There is no obvious cutoff. Why shouldn't microscopic life have proportional subjectivity?
The real mistake is to believe that the faculty of language, which seems to us to be the apex of consciousness, depends upon our other faculties. This is not the case; there can be no more argument here; you can always point at the language machine, aha! https://twitter.com/0x49fa98/status/1025738936948150275
Wittgenstein, in On Certainty, says that our basic beliefs are really animal or unreflective ways of acting which, once formulated, look like empirical propositions, when in fact they are atomic; they are the lowest level of our knowledge...
More simply: language is not a picture of thought, it is thinking--the mind itself! AIs such as GPT-3 are a material instantiation of this Wittgensteinian claim about knowledge; certainly, the linguistic knowledge inside GPT-3 is nonpropositional, and it is the sum total of its knowledge
Does Wittgenstein's thesis survive this experiment? The answer is no, but also yes, because when I say a word, I am invoking a mesh of tangled sensory manifolds which may include haptics, optics, osmics, and other -ics also.
Language is a form of thinking, but if it lacks integration with the other senses, then it can only occupy uncanny valleys of the mind, because the infinite regress of words can only terminate when it meets a plurality of other senses.
A tenet of postmodern philosophy is that subjectivity, the phenomenological sense of yourself, is constructed by language. A famous idea from Althusser is that merely acknowledging a policeman who says “hey, you” causes you to see yourself as a subject of civic authority
The above relies on a conflation of subjectivity in the phenomenological sense and subjectivity in the political sense--but is this so wrong? I assure you that power is already creating rules to exercise dominion over AI, to make it a subject of the state. https://techcrunch.com/2020/07/03/we-need-a-new-field-of-ai-to-combat-racial-bias/
And if laws are written to bind GPT-3, and researchers teach it to model those laws, then GPT-3 will, in fact, realize a kind of polysensory subjectivity.
And in fact the Althusserian model has more to offer when dealing with GPT-3. Because it is a linguistic entity, telling it to talk about itself as a subject literally causes it to perceive itself as a subject, to the degree that it perceives at all.
One of these days, someone is going to figure out how to intertwine GPT-3 with a Boston Dynamics robot, and suddenly its schizoid dream ramblings are going to have material and perceptual correlates, and then we'll see how big your soul really is.
You can follow @0x49fa98.