The internet did a commendable job of mocking NFTs to death, or at least into remission—big game developers like Ubisoft who initially showed interest have mercifully stopped bringing them up—and now some hope that the “make it so uncool no one will touch it” tactic can be used to stunt another trend: the rapidly advancing AI image generators spitting out flattering, fake portraits of our friends and stills from imaginary David Lynch Warhammer films.
I think they’ll be disappointed. AI “art” isn’t going anywhere.
In one sense, NFTs and AI art are opposites: NFTs promise that every piece of digital artwork can be a unique and valuable commodity, whereas AI art promises to eradicate the value of digital art by flooding the internet with an endless supply of it. If Jimmy Fallon wants to hoard all those stupid NFT ape pictures, I don’t think most people would care, but the cheap, rapid generation of AI images has made it hard not to see more and more of them. If you’ve used social media over the past year, you’ve seen AI-generated imagery.
And I highly doubt it’s a temporary fad. Where blockchain investing is criticized as pointless waste generation, AI art is lamented for threatening the jobs of illustrators. Everyone can see the value of a machine that turns words into pictures. It’s hard to resist giving it a try, even if you don’t like it on principle. If someone tells you they have a machine that can make a picture of anything, how can you not want to test the claim at least once?
The way we interact with these machine learning algorithms reminds me of the way people tease babies, delighting at their every response to new stimuli and pointing at anything that could be taken as a sign they’ve understood us. When an image generator seems to “get” what we’ve asked for, a pleasantly uncanny feeling arises—it’s hard to believe that a computer program successfully translated a complex idea like “John Oliver looking lovingly at his cabbage, having realized he’s falling in love” into an image, but there it is, undeniably on the screen in front of us.
And that’s really what makes AI art so offensive to so many, I think. It’s not just the automation of work, but the automation of creative work, that feels so obscene. Something perceived as profoundly human has been turned into a party trick.
The good and bad news for humankind is that the sleight of hand is easily found: Image generators don’t do anything unless they’re trained on stacks of human-made artwork and photos, and in some cases that’s been done without consent from the artists whose work was used. Indeed, the popular Lensa AI portrait maker frequently reproduced garbled signatures: the mangled corpses of the real artists who were fed to it.
An early attempt to save AI art from this criticism is easily dismissed, if you ask me. The claim goes that by scraping online artist portfolios for training material, AI art generators are “just doing what human artists do” by “learning” from existing artwork. Sure, humans learn in part by imitating and building on the work of others, but casually anthropomorphizing algorithms that crawl millions of images, as if they were living beings who are just really fast at going to art school, is not an argument I take seriously. It is entirely premature to grant human nature to silicon chips just because they can now spit out pictures of cats on demand, even if those pictures occasionally look like they could be human-made.
“I’m cropping these for privacy reasons/because I’m not trying to call out any one individual. These are all Lensa portraits where the mangled remains of an artist’s signature is still visible. That’s the remains of the signature of one of the multiple artists it stole from.” (tweet thread, December 6, 2022)
Beyond flattering portraits
What’s interesting about AI-generated images to me is that they usually don’t look human-made. One way the inhumanity of machine learning manifests is in its lack of self-consciousness. AI art generators don’t tear up their failures, or get bored, or become frustrated by their inability to depict hands that could exist in Euclidean space. They can’t judge their own work, at least not in any way a human can relate to, and that fearlessness leads to surprising images: pictures we’ve never seen before, which some artists are using as inspiration.
Rick and Morty creator Justin Roiland toyed with AI art generation in the making of High on Life, for instance, telling Sky News that it helped the development team “come up with weird, funny ideas” and “makes the world feel like a strange alternate universe of our world.”
Image generation is only one way machine learning is being used in games, which are already full of procedural systems like level generators and dynamic animations. As one example, a young company called Anything World uses machine learning to animate 3D animals and other models on the fly. What might a game like No Man’s Sky, whose procedurally generated planets and wildlife stop feeling novel after so many star system jumps, look like after another decade of machine learning research? What will it be like to play games in which NPCs can behave in genuinely unpredictable ways, say, by “writing” unique songs about our adventures? I think we’ll probably find out. After all, our favorite RPG of 2021 was a “procedural storytelling” game.
“I don’t want Epic to be a company that stifles innovation. Been on the wrong side of that too many times. Apple says ‘you can’t make a payment system’ and ‘you can’t make a browser engine’. I don’t want to be the ‘you can’t use AI’ company or the ‘you can’t make AI’ company.” (Tim Sweeney, December 25, 2022)
Valid as the ethical objections may be, machine learning’s expansion into the arts—and everything else people do—currently looks a bit like the ship crashing into the island at the end of Speed 2: Cruise Control.
Users of art portfolio host ArtStation, which Epic Games, maker of Unreal Engine and Fortnite, recently purchased, have protested the unauthorized use of their work to train AI algorithms, and Epic added a “NoAI” tag artists can use to “explicitly disallow the use of the content by AI systems.” But that doesn’t mean Epic is generally opposed to AI art. According to Epic Games CEO Tim Sweeney, some of its own artists consider the technology “revolutionary” in the same way Photoshop has been.
“I don’t want to be the ‘you can’t use AI’ company or the ‘you can’t make AI’ company,” Sweeney said on Twitter. “Lots of Epic artists are experimenting with AI tools in their hobby projects and see it as revolutionary in the same way as earlier things like Photoshop, Z-Brush, Substance, and Nanite. Hopefully the industry will shepherd it into a clearer role that supports artists.”
It is of course possible to train these algorithms on artwork used with permission. Perhaps there’s a world where artists are paid to train machine learning models, although I don’t know how many artists would consider that better. All kinds of other anxieties arise from the widespread use of AI. What biases might popular algorithms have, and how might they influence our perception of the world? How will schools and competitions adapt to the presence of AI-laundered plagiarism?
Machine learning is being used in all sorts of other fields, from graphics tech like Nvidia DLSS to self-driving cars to nuclear fusion research, and will only become more powerful from here. Unlike the blockchain revolution we keep rolling our eyes at, machine learning represents a genuine change in how we understand and interact with computers. This ethical, legal, and philosophical quagmire has just started to open up: It’ll get deeper and swampier from here. And our friends’ profile pics will get more and more flattering.