You know that feeling when you expose something personal about yourself and it is met with a wall of gut-wrenching rejection?
It was last fall that my grandmother found out I was making art with AI. That this information reached her in the first place was not of my own volition – I know who to pick my fights with. We were having lunch when she brought it up, berating me for wasting my artistic talent on machines. I tried showing her some of my art, attempting to explain myself conceptually, but she wasn't willing to listen. I hated myself for letting her make me feel so deeply ashamed of something I was excited about.
That's not to say I don't understand her concerns: AI anxiety is a real phenomenon. We are facing many huge and unanswerable questions about how AI might change the fabric of our society as its effects ripple outward. But as a younger person, I feel more optimistic than anything else about the potential positive impacts of this technology. From medical and scientific research, to augmenting our personal abilities, to giving us more time in the day, and yes, even to art, the possibilities are tremendous.
I’ve been making and posting my art online as Tinyrainboot for around two and a half years now. Back then, I started a project called “Internet Diary,” where I used AI to express ideas or excerpts from my daily experiences. I didn't have the energy at the time for my usual creative outlets, such as painting and writing. This was something different, something exciting, and above all, something I could do surreptitiously at my work desk. Plugged into Stable Diffusion, my thoughts became prompts, generating weird and unexpected visual outcomes – and so began my fascination with AI art.
Those of us who interact with this discipline on a daily basis often tend to forget just how new it all still is. It's been a mere three years since the release of DALL-E, which exposed text-to-image technology to the masses for the first time – in turn raising many questions about its implications for the future of our society and the careers of those in creative industries.
I'd like to contextualize this present moment by briefly discussing how we ended up here.
During a recent talk I gave at GenAI Zurich, upon which this article is based, I went into the history of computer and AI art and gave a nod to the pioneers who came long before us: Vera Molnár and Harold Cohen, for instance. There are fantastic resources out there that can summarize all this better than I can, such as this comprehensive timeline by Le Random.
For the trajectory of the AI art space in particular, though, the year 2014 marks a pivotal moment.
Generative models have existed for decades already: a generative model learns a dataset's statistical properties and can then create new data that fits in with the originals. But in 2014, Ian Goodfellow and his colleagues developed a new approach to generative models: generative adversarial networks (GANs).
The revolutionary component is this: GANs pit two neural networks, or machine learning algorithms, against each other. One is the "generator," attempting to produce outputs that match a collection of examples, while the other is the "discriminator," attempting to distinguish between the real dataset and the outputs produced by the generator. Through many rounds of this competition, the discriminator pushes the generator to improve. In early iterations, Goodfellow's GANs were used to generate simple images of handwritten characters, faces, and even quasi-photographic scenes.
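To make the idea concrete, here is a toy sketch of that adversarial loop in plain NumPy. This is not how real image GANs are built – those use deep networks and frameworks like PyTorch – and all the names and numbers here are my own illustrative choices: the "dataset" is just numbers drawn from a bell curve around 4, the "generator" is a single line `x = a*z + b`, and the "discriminator" is a logistic regression. The point is only to show the competition: each side takes a gradient step against the other, and the generator ends up producing samples that look like the real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    # Clip to avoid overflow warnings when the discriminator gets confident.
    return 1.0 / (1.0 + np.exp(-np.clip(u, -30.0, 30.0)))

# "Real" data: samples from a normal distribution centered at 4.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator: x_fake = a*z + b, with noise z ~ N(0, 1)
w, c = 0.1, 0.0   # discriminator: D(x) = sigmoid(w*x + c)

lr = 0.05
for step in range(2000):
    z = rng.normal(0.0, 1.0, 64)
    x_real = real_batch(64)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    w -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    d_fake = sigmoid(w * x_fake + c)
    grad_x = -(1 - d_fake) * w        # gradient of generator loss w.r.t. x_fake
    a -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

# Since E[z] = 0, the generator's output is centered at b.
print(f"generator samples are now centered near {b:.2f} (real data: 4.0)")
```

After training, the generator's offset `b` has drifted from 0 toward the real data's mean – the discriminator's feedback alone dragged it there, which is the whole trick.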
In 2017, this technology took another leap forward, when a project called CycleGAN demonstrated how GANs could be used to modify images, such as converting an image into the style of a specific painter or adding zebra stripes to a horse. See where this is going?
These advances in generative modeling paved the way for OpenAI's DALL-E, which launched in January 2021 and, soon enough, brought text-to-image to the public's attention. Suddenly, the most bizarre combinations of ideas could be rendered visually, brought to life by an algorithm.
Today, most commonly used AI image generators rely on diffusion models, rather than GANs, for higher-quality outputs. During training, forward diffusion gradually adds noise to an image, and the model learns the reverse process: removing that noise step by step. At generation time, it runs the reverse process on pure noise to produce a new image. Thanks to this iterative refinement, diffusion models are less likely than GANs to suffer from "mode collapse," which is when the algorithm gets "stuck" creating outputs that are repeated or highly similar.
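The forward half of that process is simple enough to sketch in a few lines. The snippet below – a minimal illustration, not a usable generator – uses a stand-in "image" of random pixel values and the linear noise schedule popularized by the DDPM paper, and shows how the original signal fades as the timestep grows. The reverse half is the hard part: it requires a trained neural network that predicts the noise at each step, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image": 10,000 pixel values, zero mean and unit variance.
x0 = rng.normal(0.0, 1.0, 10_000)

# Linear noise schedule: beta ramps from 1e-4 to 0.02 over T steps.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)  # cumulative fraction of signal retained

def forward_diffuse(x0, t):
    """Jump directly to step t of the forward process:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    noise = rng.normal(0.0, 1.0, x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

for t in (0, 250, 500, 999):
    xt = forward_diffuse(x0, t)
    corr = np.corrcoef(x0, xt)[0, 1]
    print(f"t={t:4d}  signal kept={np.sqrt(alpha_bar[t]):.3f}  "
          f"correlation with original={corr:.3f}")
```

By the final step the diffused sample is statistically indistinguishable from pure noise – which is exactly why a generator can start from pure noise and run the learned process in reverse.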
Particularly due to a lack of transparency around how image-generating models have been trained, AI art has faced harsh criticism and been steeped in numerous controversies.
The collective behind Portrait of Edmond de Belamy, the first AI artwork ever to be auctioned by Christie's in 2018, faced backlash when it emerged that they had used the code and dataset of another artist, Robbie Barrat, to generate the work. A slew of ethical, moral and legal questions are intertwined with the discipline.
But does AI really represent the end of art as we know it?
When the camera was invented, some declared it the end of art, arguing that since taking a photo required less effort and skill than painting, it was the device, not the human, that was responsible for the final image. – James Bridle, The Guardian
I know, I know, this parallel has been beaten to death already. But I believe that this comparison glosses over an important difference between AI and the camera. What I want to point out is that we face a fundamental lack of alignment when it comes to the general public's understanding of what AI art is and how image-generating algorithms work.
Researchers at Stanford University's Institute for Human-Centered AI ran a controlled, randomized study and found that half of participants saw AI simply as a tool, while the other half viewed it as an autonomous agent with its own beliefs and intentions.
One of the challenges we face here is the anthropomorphization of AI. We say that AI “hallucinates,” or “dreams,” applying human characteristics to the technological process by which algorithms calculate their outputs. Studies have revealed that anthropomorphization affects trust, thereby creating an obstacle for the accountability and governance of AI systems. In short – it's complicated, and better understanding of how AI works will be needed to get us all on the same page.
Bridle goes on to say,
There is no true originality in image generation, only very skilled imitation and pastiche – that doesn’t mean it isn’t capable of taking over many common "artistic" tasks long considered the preserve of skilled workers. – The Guardian
It's undeniable that many jobs in creative industries will be affected by AI. Jobs will be lost, jobs will change, and new jobs will be created. If you're a product photographer or graphic designer, you have certainly already seen a significant impact on your industry brought on by AI.
But I have an argument to make here: this bland summary of what AI art is not only glosses over the human component, but also fails to recognize that art is often about more than the final image.
When we instead view AI art as a form of conceptual art, it becomes much more interesting in its marriage of man and machine. The idea behind the work and the process of creating it are more important than the outcome – and this is where I believe our generation's artists will find new ways of pushing the boundaries of creation.
If you consider the whole process, then what you have is something more like conceptual art than traditional painting. There is a human in the loop, asking questions, and the machine is giving answers. That whole thing is the art, not just the picture that comes out at the end. You could say that at this point it is a collaboration between two artists – one human, one a machine. And that leads me to think about the future in which AI will become a new medium for art. – Ahmed Elgammal, director of the Art and Artificial Intelligence Lab at Rutgers University
There’s a lot of fear around AI and how it will change our status quo. The rise in AI-generated content on social media has even been called the “enshittification” of the internet.
But what a lot of people tend to forget in their fear of change – in their fear of the new – is that we, humans, are the key component. We define and contextualize what is interesting to us. Justin Hanagan from Stay Grounded draws a parallel I find very interesting. He compares today's AI debate with the first time a human chess master was beaten by a computer in 1997:
What is interesting about a chess-playing computer is not that it’s good at chess, it’s that it exists at all. The interesting thing about a chess-playing computer is that some former tree-dwelling primates arranged tiny bits of metal and silicon in such a way as to coerce the universe into playing a game better than any other tree-dwelling primate could dream to. – Future Grandmasters of the Attention Game
When was the last time anyone cared about watching a computer beat a grandmaster at chess? Or about watching two supercomputers play each other instead? Two-plus decades ago?
Humans care if something is interesting, and novelty can only be interesting for so long once there are no humans involved.
Let me reiterate: the human component of AI art is what makes it valuable. So could it be that, rather than witnessing the death of artistry as we know it, we are witnessing its modern renaissance? Instead of focusing on aesthetics, we are being given a chance to redefine the meaning of art and where we can take it. Shouldn't this push creatives to try even harder? To find the boundaries, and new ways of breaking them again and again?
I believe that we are only just beginning to uncover what we can create, empowered by new tools and our evolving relationships with them. Exploring where this human-machine dialogue takes us is the most interesting part about AI art.
Naturally, we can't say yet how AI will impact the longer-term creativity of our society, or how it might affect our brains, thinking patterns or learning styles.
And all I can say now is that AI certainly hasn’t killed the artist yet. I have no crystal ball, though – maybe it still will.
♡ tinyrainboot