This @guardian article was written by GPT-3. Sort of. They generated 8 versions and spliced the best parts of each run together. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
GPT-3 is impressive, but this editorial intervention causes some issues.
1st, it’s hard for readers to judge exactly how good GPT-3 is. Is the text more coherent because of the splices? It still has coherence issues and contradictions. Would it be more or less coherent without the edits?
2nd, on the choice of content: an AI convincing humans it is harmless.
Those who don’t know much about GPT-3 might be led to believe that the system in fact holds this belief. But Guardian staff could just as easily have chosen the opposite...
…only by reading the editorial notes at the end can one get a sense of how large a role Guardian staff played in the creation of the piece. This is not a knock on GPT-3; it does what it was asked. But there is no effort to educate readers on what GPT-3 is actually doing.
Case in point… https://twitter.com/spignal/status/1303297724037689344