The first of what is sure to be many labor protests against artificial intelligence has already begun. It started with Hollywood writers determined to make sure they won’t be forced to work with, or be replaced by, AI tools like ChatGPT. But it has already grown beyond that. And it’s not hyperbolic to say that what’s happening in Hollywood right now will have broad ramifications for, well, all of us.
The current Writers Guild of America strike is ostensibly about streaming residuals. The bulk of the WGA’s proposals, as of May 1, are calls for clearer and fairer budgets and compensation for the TV shows and movies that make their way, in some form, to various streaming platforms.
But nestled at the end of the last page of the WGA’s two-page proposal sheet is a section called “Artificial Intelligence.” It is the first large-scale attempt by a labor union to pressure an industry to regulate and, in some cases, ban the use of AI as a replacement for workers. And it will assuredly be replicated by unions across the country as this technology becomes more prevalent and, if current trends continue, more advanced.
The WGA is requesting that the Alliance of Motion Picture and Television Producers (AMPTP) ban the use of AI to write or rewrite literary material, ban AI-generated material from being used as source material, and guarantee that no AI is trained on WGA writers’ work. So far, the AMPTP has rejected the WGA’s proposal and merely offered to hold “annual meetings to discuss advancements in technology.”
And the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) has already voted unanimously to join the strike if negotiations between the WGA and the AMPTP completely fall apart.
The fears these unions are reacting to, that studios could replace human writers in a writers room or human actors on a movie set, aren’t hypothetical. According to a New York Times piece that asked “Will a Chatbot Write the Next Succession?”, one recent Netflix contract reserved the rights to simulations of actors’ voices “throughout the universe and in perpetuity.”
What was once solely the realm of science fiction is now very real. And it won’t just impact pop culture. We are barreling toward a world where AI has not only put hundreds of millions of people out of work, but also replaced key services with unthinking automations. And if workers can’t draw a line now, while this technology is still in its infancy, there might not be another chance to wrest back what we’ve lost. Hollywood is just the front line.
The big unknown, however, is whether audiences would even notice if AI tools started producing their favorite TV shows. Technologist and activist Cory Doctorow told Polygon you’d notice pretty fast.
“I think it’d be pretty immediate,” he said. “I’m not clear that the studios could do it; like, they might try for a while, but they’d be like rolling the dice that the U.S. Copyright Office would reverse itself on the copyright status of materials produced by machine learning models.”
Which is another big question about all of this. In February, the U.S. Copyright Office ruled that you can’t copyright purely AI-generated material. If a company wants to copyright anything that comes out of an AI, it has to prove a human altered it enough to be original.
“If it becomes the case that the way you make a new Pixar movie is by typing some prompts into a keyboard, and then going down the street to the smokehouse for a three-martini lunch and then coming back, and it’s there on your hard drive,” Doctorow said, “then all of a sudden people are like, I can copy that Pixar movie and sell it to anyone I want. Because it was made by a bot.”
But to understand what these tools are even capable of and why it’s so hard to figure out how they fit into our existing legal frameworks, you need to know a bit about how they work. When we talk about AI in this context, we’re really talking about a kind of AI called generative AI. Specifically, we’re talking about large language models.
The most popular of these tools right now is ChatGPT, which is owned by OpenAI and runs on a kind of large language model called a generative pre-trained transformer, or GPT. Basically, the model is trained on a whole lot of text taken from all over the web, and it draws on the patterns it learned from that text to respond in ways that seem naturally human. Currently, ChatGPT is running on GPT-3.5 and GPT-4, and the same technology is integrated into the AI interface for Microsoft’s Bing search engine and Edge web browser.
Though you’ve probably heard that GPT language models are sentient, or could become sentient, you don’t really have to worry. They work the same way the autocomplete function on your phone works: given everything written so far, they predict the most plausible next word, over and over, except they’ve learned those predictions from massive chunks of the internet. But generative AI can’t generate something it doesn’t know.
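To make the autocomplete analogy concrete, here’s a minimal sketch in Python of the same idea at toy scale, using a made-up three-sentence training corpus (this is a simple bigram model, not anything OpenAI ships; real systems like GPT-4 swap the word counting for a neural network trained on trillions of words, but the loop is the same: predict the next word, append it, repeat):

```python
from collections import Counter, defaultdict

# Toy "training corpus" (made up for illustration; real models
# train on a sizable fraction of the public web).
corpus = (
    "the writers went on strike . "
    "the writers wrote the show . "
    "the studios replaced the writers ."
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def autocomplete(word, length=5):
    """Repeatedly predict the likeliest next word: autocomplete at toy scale."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break  # the model can't generate what it has never seen
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(autocomplete("the"))  # prints: the writers went on strike .
```

The toy version makes that last limitation visible: if a word never appeared in the training text, the model has nothing to predict from.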
For instance, ChatGPT still has a “knowledge cutoff” of 2021. There have been instances of current events slipping into its data set (it seems to know that the queen of England died), but for the most part it still thinks it’s 2021 when you talk to it. When an AI doesn’t know something but is pushed to answer a question about it, it will “hallucinate,” confidently producing plausible-sounding but false answers. Users on sites like Reddit also often share ways of “jailbreaking” these chatbots so they will answer questions forbidden by their terms of service.
Meanwhile, Bing’s version of GPT-4 is actually connected to a live feed of the internet. It can summarize current events easily, though when it first launched, it threatened journalists who were writing adversarial pieces about it and begged users to kill it. A universal reaction to using the internet, whether you’re man or machine, apparently. (It’s fixed now.)
Though GPT-4, OpenAI’s newest language model, is only a few months old, we’ve already got a good handle on what it is and isn’t good at. It’s pretty bad at creating anything. If you ask it to tell you a joke or write you a song, it will, but the results are flat and derivative. It’s the same story if you ask it to write, say, a basic scene of two people having a meet-cute in a romantic comedy. But what it is good at is summarizing huge amounts of data.
One user asked it to read the entirety of J.R.R. Tolkien’s work on Middle-earth and then asked the AI if there’s any evidence that people in Middle-earth poop (there isn’t). Another user asked it to summarize — and make a diagnosis based on — his dog’s blood charts (the diagnosis was largely correct). And back in February, a Twitch stream called Nothing, Forever launched, which used GPT-3 to endlessly generate Seinfeld scripts that would then be “acted out” by AI 24 hours a day, seven days a week. The scripts were mostly gibberish, but the channel was a huge hit… until its content became transphobic and its creators took it down for maintenance.
The trainability of these models is another, thornier risk for creatives trying to regulate this technology. Doctorow said that if a right to train an AI were created (as in, if writers suddenly had a legal right to say who could and couldn’t train an AI on their writing), that right could quickly become a demand from prospective employers.
“All the employers will demand that you assign that right to them as a condition of working for them,” he said. “It’s just a roundabout way of saying that large corporations who are in a buyer’s market for creative labor will be the only people in a position to build models that can be used to fire all the creators.”
But exactly how good is this technology? Well, this month a Disney fan blog wrote a story titled “An AI rewrites the Star Wars Sequel Trilogy. It’s more coherent than Disney’s.” The AI was asked to “pretend that Disney did not create any more Star Wars movies after Return of the Jedi” and then imagine the plots of movies that George Lucas would have created instead of the sequel trilogy we got.
It came up with three movies: Episode VII: The Force Unleashed, Episode VIII: Rise of the Sith, and Episode IX: Balance of the Force. It even described a pretty tight narrative arc across the three movies, featuring a trio of main characters, including “a young Jedi named Kira,” “a former stormtrooper named Sam,” and “a wise old Jedi named Ben.” Honestly, the AI’s new sequel trilogy sort of reads like the one that actually happened, but it moves the discovery of an evil Sith home world and the reveal of a Sith master villain to the second movie, not the third.
The AI also offered up casting for the main characters, descriptions of new planets and vehicles, and even took a stab at writing a scene where Luke Skywalker, Princess Leia, and Han Solo show up for cameos. Is any of this perfect? Absolutely not. Is it good enough for a human to potentially clean it up for actual production or publication? No question.
Users have found that generative AI chatbots are particularly good at brainstorming genre entertainment because there are more agreed-upon tropes to mix and match. But, as the Hollywood Reporter pointed out last week, the technology could also be used to cross the picket line during the strike on long-running shows like Law & Order: the more scripts an AI has to summarize, the more novel ways it can remix them.
But, also, let’s be clear about what we’re talking about here. A generative AI like GPT-4 can’t create anything new. It can only reorganize what already exists into new combinations. A film industry where chatbots are allowed in writers rooms — or allowed to replace them entirely — is an industry that has quite literally run out of ideas.
But Hollywood isn’t the only sector of the American workforce in the midst of a generative AI upheaval. IBM has paused hiring for roles it believes AI could perform, and one executive at the company recently speculated that almost a third of those jobs could be replaced by a chatbot of some kind. The service industry isn’t safe either: Wendy’s is testing a drive-thru chatbot as we speak. A recent Bloomberg article estimated that at least a quarter of the American workforce will have their jobs impacted by AI in the next five years.
And non-unionized creative industries are already beginning to experiment with how to use these tools in a writers room. Ashley Cooper, a narrative lead and video game developer, told Polygon that she was recently approached by an indie video game studio that was very interested in using AI to write scripts.
“An indie studio I’d done some work for a few years back emailed me asking, ‘We are looking for a writer to use some AI and get us some back-and-forth dialogue,’” she said.
Cooper said that the games industry is already pretty bad at hiring new creatives, and she fears that leaning on AI could close off the paths young writers use to get a foothold in the industry.
“At its core, [AI] doesn’t exist to make the lives of writers easier; it exists to minimize a studio’s need for writers,” she said.