Microsoft is imagining a world in which a game developer can sit down at their computer, pull up an artificial intelligence-powered design “Copilot,” and enter a brief prompt to quickly produce everything from scripts and dialogue to quest lines and NPC barks.
But big questions remain. What problem are these AI tools trying to solve? For some developers, there isn’t one: AI is a solution for a problem that doesn’t exist, one writer with 10 years of experience in the video game industry told Polygon. (The writer was granted anonymity, as they were not authorized to speak to the press.) And can AI actually create interesting content when it can only retread data it has scraped? Video game executives seem to think so: In a survey of 25 gaming executives, Bain & Company found that leadership believes “more than half of the video game development process will be supported by generative AI within the next five to 10 years.” The people who actually make games aren’t so sure.
On Monday, Microsoft announced a partnership with Inworld, a company building AI tools for game development, including an “AI character runtime engine” that would essentially give NPCs endless dialogue and quests in real time. Microsoft says these tools are assistive, a way to “empower” developers in the game design process; Inworld said they could “reduce the time and resource constraints” to ship games faster, with “more expansive and immersive worlds and stories,” according to the companies’ respective posts on the subject. (Representatives from both Microsoft and Inworld declined to be interviewed for this story.)
Video game studios big and small have begun experimenting with generative AI tools like the ones Microsoft and Inworld have proposed. Ubisoft announced its Ghostwriter tool in March, which it uses to generate “first drafts of barks,” the one-off lines a character shouts in a multiplayer shooter, like “Grenade!” or “Fire in the hole!” High on Life developer Squanch Games used AI-generated art for set decoration and AI voices for some performances. CD Projekt Red used AI to recreate the voice of a voice actor who had died, with the family’s permission. And more buy-in is expected, as companies like Riot Games and Nexon America look to hire AI specialists to build out their own internal tools.
AI as a concept is hard to pin down; it means something different to nearly everyone you ask. It’s safe to say, though, that game developers have long used AI, to create dynamic NPC reactions in combat, for instance. Generative AI is something else entirely, with different applications. “AI in general has been utilized by game developers since the dawn of the medium, but usually in terms of things like enemy behavior or procedural generation,” narrative designer Anna Webster told Polygon, adding that generative AI does the very things writers are hired to do, enjoy doing, and are trained to do proficiently.
“What they’re hawking in this case is generative AI. That is, something which can supposedly come up with content to support those working creatively. You know, the thing that humans genuinely enjoy doing,” she told Polygon. “I’m not sure why I, or anyone for that matter, would want to give that part of my job to a machine.”
The only people who are interested in making bigger video games on shorter timelines are in leadership, several game developers told Polygon.
“AAA and AAA-plus is getting really expensive, by virtue of more assets, more content, bigger world,” the writer with 10 years of experience said. “And you need to fill those worlds. Who’s going to do that? It requires an army of designers, an army of writers. If you could somehow cut that aspect of time, effort, and the human beings paid to do that, you’ve got yourself a big money split and money saver there.”
A tool like Inworld’s design Copilot creates more content: quests, lore, dialogue, the stuff a studio already planned on creating, but “cheaper and faster,” as New York University associate professor Julian Togelius put it to Polygon. The character runtime engine does something different entirely. It builds generative functionality into the video game itself, creating a “forever game” with endless content possibilities. That, he said, has potential for “redefining” how games are made. “This is very interesting and might make completely new types of games available,” he said.
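Neither company has published technical details, but the general shape of a “character runtime” is easy to sketch. What follows is a purely hypothetical fragment, written in Python for readability; every name in it is invented for illustration, and generate_reply is a stub standing in for whatever text-generation model such an engine would actually call.

```python
# A hypothetical sketch of a "runtime" AI character, not Inworld's or
# Microsoft's actual design. All names here are invented for illustration.

def generate_reply(prompt: str) -> str:
    """Stub standing in for a call to a text-generation model."""
    return "A stranger! The east mines went quiet last week..."  # canned placeholder

def build_prompt(persona: str, memory: list[str], player_line: str) -> str:
    """Fold the NPC's fixed persona and recent conversation into one prompt,
    so each generated reply stays in character and remembers what was said."""
    history = "\n".join(memory[-10:])  # only recent turns fit in the prompt
    return f"{persona}\nConversation so far:\n{history}\nPlayer: {player_line}\nNPC:"

persona = "You are Mirra, a wary blacksmith in a frontier town."
memory: list[str] = []

def npc_turn(player_line: str) -> str:
    """Called every time the player talks to the NPC, while the game runs."""
    reply = generate_reply(build_prompt(persona, memory, player_line))
    memory.append(f"Player: {player_line}")
    memory.append(f"NPC: {reply}")
    return reply

print(npc_turn("Any work for a traveler?"))
```

The structural difference Togelius describes lives in that last function: the model is called while the game is running, per player and per conversation, rather than at a writer’s desk before the game ships.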
A forever game would capture player attention and hold it for as long as possible. AI would bring that dream closer to reality, without executives needing to spend on human labor; the AI could even respond to player feedback in real time. While the idea of AI characters intrigues some of the developers Polygon spoke to, these tools raise more questions than they answer: Does anyone actually need a forever game? Could it create a great experience? Generative AI, very simply put, works by learning from a huge pool of text data and using predictive text to generate responses to prompts. It can only retread ideas that already exist in the material it pulls from, without any human spark.
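To make the “retreading” point concrete, here is a deliberately tiny sketch of how predictive text works: a toy bigram model rather than a real large language model, trained on an invented two-line “corpus” of quest-giver dialogue. Real systems are neural networks trained on billions of words, but they share the same basic property: the output is assembled from patterns in the training data.

```python
import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict:
    """Record, for each word, every word that ever followed it."""
    words = corpus.split()
    model = defaultdict(list)
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model: dict, seed: str, length: int = 12) -> str:
    """Repeatedly predict the next word by sampling from observed successors."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # dead end: the model has never seen this word lead anywhere
            break
        out.append(random.choice(followers))
    return " ".join(out)

# A toy "dataset" of quest-giver lines (invented for this example).
corpus = (
    "bring me five wolf pelts and I will reward you well "
    "bring me the ancient sword and I will reward you handsomely"
)
print(generate(train_bigrams(corpus), "bring"))
# e.g. "bring me the ancient sword and I will reward you well"
# The output remixes the corpus; it can never mention anything the corpus lacks.
```

Scaled up by many orders of magnitude, the results get far more fluent, but they remain bounded by their sources.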
“You’re going to have a small pool of information to draw creativity from. It’s going to start feeling the same. It’s the same gray sludge repeated endlessly,” the writer with 10 years of industry experience said. That’s bad for the writers who create video game stories, but it could end up being bad for players, too.
How generative AI learns, by ingesting that massive dataset, is an unanswered question, too. As in other industries, developers are asking whether creators whose copyrighted work an AI scraped have a claim to that AI’s output. “This is not clear, but I think the problem can be solved by new models that are trained on licensed or Creative Commons work,” Togelius said. Webster described output trained on copyrighted work, or on work from artists and writers who haven’t consented, as “the absolute worst kind of chicken nugget”: a piece of work assembled from blended-up, dubious parts. “[And it’s] a chicken nugget that is in potential violation of intellectual property and copyright law.”
Togelius told Polygon that using generative AI to create dynamic NPCs won’t actually be faster or cheaper, but it has the potential to inspire entirely new types of video games. “If you want to do something like [large language model]-driven NPCs, you’ve got to design from the ground up for that. That’s a much harder way of using AI, but I argue, more interesting.” And generative AI has applications beyond generating content; he pointed to how programmers could use it to generate code, or to learn to code.
Matthew Seiji Burns, who created Eliza, a visual novel about an AI therapy program, has played around with the tools and is open to the possibility that generative AI could be useful someday. He’s not opposed to it. But right now, he said, game developers are still wondering what problem it’s trying to solve, beyond executives looking to make bigger games for less money.
“I’ve played with them quite a bit,” Burns said. “We’re interested in it, right? Like, if someone said, ‘Oh, here’s a tool that lets you do your job easier, and makes it more fun and takes away the drudge work, and you can concentrate on being creative,’ that sounds great, right? But we’ve been trying all these tools, and we’re not seeing the amazing benefit the hype is kind of promising.”
These questions aren’t so different from the ones being asked in other corners of the industry, like performance. Video game performers, including voice actors and motion capture artists, represented by the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), are currently negotiating with a group of video game companies to secure AI regulations, among other things. Sara Secora, a voice director and actor whose credits include Genshin Impact and Fallout 76, told Polygon that voice actors want the ability to consent, or decline to consent, to AI being used to replicate their voices.
“The scary thing is people stealing our voices without consent,” she said. “Every single job we’ve worked on for video games or animation, we’ve signed full perpetuity buyouts, which gives [companies] full rights over content, meaning they own it. They can feed it into a machine and do whatever they want with it.”
Some actors signed those rights away years before generative AI was even used in this way. “We did not know this was coming,” Secora said. “Now we have a new language, the [National Association of Voice Actors] rider and other things to help protect us.” And that’s on top of the AI protections SAG-AFTRA is looking to secure for voice actors. (TV and film actors just ended their 118-day strike after reaching an agreement with Hollywood production companies.)
It’s a fight the Writers Guild of America took on earlier this year, when it went on strike for 148 days to win a new minimum basic agreement that regulates the use of AI, documenting the restrictions and rules in precise language. The new contract doesn’t ban AI outright; instead, it governs how AI is trained and how writers are credited and compensated. It’s a template video game writers may look to in imagining how regulations could be imposed on generative AI in their own industry, though game development lacks the industry-wide union that Hollywood writers have.
For now, the game developers who spoke to Polygon have a lot more questions than answers. There is one answer, though, that they think is obvious: Invest in the developers you have, and bring new voices in.
“Game developers are already some of the most creative people on the planet,” Burns said. “The problem isn’t thinking of new ideas. Every game designer has a million ideas, and many could be right. Saying that game developers could be made more creative by AI is weird to me. If executives truly wanted more creative work in games, they should invest in game developers themselves — the new voices and human perspectives we haven’t seen before.”