My gut tells me that code generation using GPT-3 generates a lot of hype, but is pure gimmickry. Prove me wrong!
The reason is that code generation requires a lot of semantics, and those semantics aren't something GPT-3 can extract. Also, specifications for code, such as for a UI, usually require conversation and arriving at an agreement.
So a lot more machinery needs to go into this to get a system that can carry on a good conversation and extract those semantics.
I suspect code generation makes for a good demo because most people can't code and aren't aware of the gap between someone's specification and working code.
This is also why most people focused on the Rubik's Cube solving and not the hand dexterity. The latter was the breakthrough, but people don't understand why the seemingly ordinary is very hard.
People simply have a distorted understanding of what is difficult and what is not, and this relates to how hype is created. The real stuff can't be hyped because most people don't grok it!
Extracting complex semantics via conversation is hard. Generating the code once you have the semantics is easy. Yet people think the latter is hard.
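A toy sketch of that point, with entirely hypothetical names (ButtonSpec, render_button are inventions for illustration, not anything from GPT-3 or the demos): once the semantics of a UI element have been pinned down in a structured spec, turning that spec into code is essentially mechanical template filling.

```python
# Hypothetical illustration: the "easy" half of code generation.
# Once semantics are captured as structured data, emitting code is mechanical.

from dataclasses import dataclass

@dataclass
class ButtonSpec:
    """The 'semantics' of a button, already extracted and agreed upon."""
    label: str
    color: str
    on_click: str  # name of the handler function to wire up

def render_button(spec: ButtonSpec) -> str:
    """Deterministic translation of an agreed-upon spec into code."""
    return (
        f'<button style="background:{spec.color}" '
        f'onclick="{spec.on_click}()">{spec.label}</button>'
    )

# The hard part -- turning "make me a nice signup button" into this spec
# via back-and-forth conversation -- is what the demos skip over.
spec = ButtonSpec(label="Sign up", color="#3b82f6", on_click="handleSignup")
print(render_button(spec))
```

Running this prints a complete HTML button tag. The translation step is trivial precisely because all the ambiguity was resolved before the spec existed; that resolution is the conversational work the demos don't show.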
I also think only a very small number of people understand which problems in AI are hard and which are easy! Which makes sense, because we all have only the faintest idea of a theory of intelligence.