Humans out of the loop
“How does GPT-3 work?”, I typed into OpenAI’s playground website.
“GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence (AI) system developed by OpenAI, a research laboratory based in San Francisco. It is a large-scale language model that uses deep learning to produce human-like text. GPT-3 is trained on a massive amount of text from the internet and is designed to generate human-like text when given a prompt. This is achieved by predicting the next word in a sentence based on the words that precede it. GPT-3 can generate text from scratch, complete tasks like question-answering, and generate text that is almost indistinguishable from human-written text.”, GPT-3 replied.
Good answer. Makes sense, understandable. It reads like it could have been written by a copywriter. Perhaps GPT-3 just copied it from a website, I thought. So I googled it. I searched for that exact paragraph. But, according to Google, there are no matching search results. That paragraph has never existed before; no one has ever written it and put it on the internet for GPT-3 to copy. It’s new.
GPT-3 used how it works to write an explanation about how it works. That’s a bit of a loop.
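That “predicting the next word” idea is simpler than it sounds. Here’s a minimal sketch of it, a toy bigram model that only looks one word back — nothing like GPT-3’s real scale or architecture, just an illustration of the same basic principle of generating text from statistics of text it has already seen:

```python
from collections import defaultdict, Counter

# A tiny corpus standing in for "a massive amount of text from the internet".
corpus = "the model predicts the next word the model generates text".split()

# Count which word follows which (a bigram table).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" - the word that most often follows "the"
```

Everything this toy “writes” is assembled from patterns in its training text. Scale the corpus up to the internet and the output starts to look like that paragraph above.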
Generative AI, whether it’s making text, images, music, or code, doesn’t really create something new. As GPT-3’s explanation says, it has been trained on text found on the internet. It takes all of that information and regurgitates it, which means it can only create based on what already exists. AI-generated images are only possible because there are lots of images on the internet for the AI to learn from. Same with music: there’s lots available to learn from.
The more generative AI is used, the more AI-generated material there will be, and the more AI will learn from it. Eventually, everything on the internet will have been created by AI that has been trained on everything on the internet that was created by AI. That’s a bigger version of the same loop that GPT-3 used to explain how it works.
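You can watch that loop flatten things out in a toy simulation. Assume a “culture” of ten distinct works, and a “model” that each generation produces new material purely by sampling from the previous generation’s output — an illustrative caricature, not a claim about any real system:

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

# Start with a diverse "internet" of ten distinct works.
corpus = list("abcdefghij")

# Each generation, the "model" learns from the current corpus and
# generates new material by sampling from it; that output then
# becomes the next generation's training data.
for generation in range(20):
    corpus = [random.choice(corpus) for _ in range(len(corpus))]

print(len(set(corpus)))  # far fewer distinct works than the 10 we started with
```

Each pass through the loop can only recombine what the last pass produced, so variety steadily drains away — the statistical shadow of the cultural point being made here.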
But hang on. Isn’t that exactly how human culture was created? Music referencing history, art reacting to politics, writers influenced by other writers, all the way back through human history. Maybe the last time anyone did anything original was when some prehistoric person made a meaningful mark on a cave wall. Since then, everything has built on what went before it.
So, using AI to write and to make images and music is to continue developing human culture, at an increasing scale and an increasing pace, but a continuation nonetheless. It’s like using a calculator to do sums: it makes the work easier, but it doesn’t change how mathematics works. Generative AI is not a discontinuous change for human culture; it’s the natural next step.
These self-referential, self-reinforcing loops of AI learning from what AI created, repeated over time, eventually remove humans from the creation of human culture. We become the consumers, no longer having input into our own culture. When AI can create a thousand similar images in a second, what hope does one painting that takes a person weeks to create have of influencing mass culture? And when culture is caught wholly in a self-referencing loop that uses only what already exists, it prevents anything different from arising.
But what humans do that AI doesn’t, what is absolutely core to human nature, is to seek novelty, react against things, or just be plain awkward.
When we think about leaps in human culture, moments where things seemed new to us, they have always come as a reaction to what went before, or a merging of things. Impressionist painters reacted against Romanticism, the mainstream art of its day. Jazz came out of entwining American and European classical music with African folk songs.
Culture builds on what went before, but when humans build culture they do it in messy, tangential, reactionary ways. When AI builds culture it optimises for efficiency, sameness and incremental change.
Taking humans out of the culture creating loop leads to a very inhuman human culture.