
Is the video game industry facing an AI renaissance? What are the impacts?

Imagine playing the classic first-person shooter DOOM, but with a twist: the game isn’t running on code written by programmers; it’s being generated in real time by an AI system.

This effectively describes GameNGen, a breakthrough AI model recently unveiled by Google and Tel Aviv University researchers.

GameNGen can simulate DOOM at over 20 frames per second using a single Tensor Processing Unit (TPU).

Within that processing unit, we’re witnessing the birth of games that may essentially think and create themselves.

Instead of human programmers defining every aspect of a game’s behavior, an AI system learns to generate the entire game experience on the fly.

At its core, GameNGen uses a kind of AI called a diffusion model, much like those used in image generation tools such as DALL-E or Stable Diffusion. However, the researchers have adapted this technology for real-time game simulation.
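For readers curious about the mechanics, here is a minimal, hypothetical sketch (in PyTorch, and emphatically not the researchers’ code) of the core diffusion training step that image generators rely on: noise an image at a random timestep, then train a network to predict the noise that was added. A real model also conditions on the timestep, and GameNGen further conditions on gameplay history; the toy `denoiser` below does neither.

```python
import torch
import torch.nn as nn

T = 1000                                             # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)                # illustrative noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)

denoiser = nn.Sequential(                            # toy noise-prediction network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(denoiser.parameters(), lr=1e-4)

frame = torch.rand(1, 3, 64, 64)                     # placeholder for a game frame
t = torch.randint(0, T, (1,))                        # random timestep
a_bar = alphas_cumprod[t].view(1, 1, 1, 1)
noise = torch.randn_like(frame)
noisy_frame = a_bar.sqrt() * frame + (1 - a_bar).sqrt() * noise   # forward noising

predicted_noise = denoiser(noisy_frame)              # learn to predict the added noise
loss = nn.functional.mse_loss(predicted_noise, noise)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

At playback time, a trained network of this kind is run in reverse, repeatedly denoising random noise into a coherent frame.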

The system is trained in two phases. First, a reinforcement learning (RL) agent – another kind of AI – is taught to play DOOM. As this AI agent plays, its gameplay sessions are recorded, creating a large dataset of game states, actions, and outcomes.

In the second phase, this dataset is used to train GameNGen itself. The model learns to predict the game’s next frame based on the previous frames and the player’s actions.

Think of it like teaching the AI to imagine how the game world would respond to each of the player’s moves.
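To make those two phases concrete, here is a rough, hypothetical sketch of how a recorded play session might be turned into training examples of the form (recent frames, recent actions) → next frame. The `play_one_step` function, frame size, and action count are invented for illustration; they are not GameNGen’s actual settings.

```python
import random
from dataclasses import dataclass

import torch

CONTEXT = 8           # how many past frames/actions the model conditions on (illustrative)
NUM_ACTIONS = 16      # illustrative size of the game's action space

def play_one_step():
    """Stand-in for the RL agent taking one action and the game rendering one frame."""
    action = random.randrange(NUM_ACTIONS)
    frame = torch.rand(3, 64, 64)          # placeholder for the rendered frame
    return action, frame

# Phase 1: record a gameplay session as parallel lists of actions and frames.
actions, frames = [], []
for _ in range(1000):
    a, f = play_one_step()
    actions.append(a)
    frames.append(f)

# Phase 2: slice the recording into supervised examples for the generative model.
@dataclass
class Example:
    context_frames: torch.Tensor    # (CONTEXT, 3, 64, 64)
    context_actions: torch.Tensor   # (CONTEXT,)
    next_frame: torch.Tensor        # (3, 64, 64)

dataset = [
    Example(
        context_frames=torch.stack(frames[t - CONTEXT:t]),
        context_actions=torch.tensor(actions[t - CONTEXT:t]),
        next_frame=frames[t],
    )
    for t in range(CONTEXT, len(frames))
]

# A diffusion model (as sketched earlier) would then be trained to generate
# `next_frame` conditioned on `context_frames` and `context_actions`.
```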

One of the biggest challenges is maintaining consistency over time. As the AI generates frame after frame, small errors can accumulate, potentially resulting in bizarre or impossible game states.

To combat this, the researchers implemented a clever “noise augmentation” technique during training. They intentionally added varying levels of random noise to the training data, teaching the model how to correct and eliminate this noise.

This helps GameNGen maintain visual quality and logical consistency even over long play sessions.
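As a rough illustration of that idea, the sketch below corrupts the context frames with a randomly sampled noise level during training and returns that level, so the model can be told how much noise to undo. It is a simplified, assumed version; the paper’s exact scheme differs in its details.

```python
import torch

def augment_context(context_frames: torch.Tensor, max_noise: float = 0.7):
    """Add a random amount of noise to the frames the model conditions on.

    context_frames: (batch, context, channels, height, width), values in [0, 1].
    """
    batch = context_frames.shape[0]
    # Sample one noise level per example; a level near 0 leaves the context almost clean.
    noise_level = torch.rand(batch, 1, 1, 1, 1) * max_noise
    noisy = context_frames + noise_level * torch.randn_like(context_frames)
    return noisy.clamp(0.0, 1.0), noise_level.flatten()

# Example usage with a dummy batch of context windows:
ctx = torch.rand(4, 8, 3, 64, 64)       # (batch, context, C, H, W)
noisy_ctx, levels = augment_context(ctx)
# During training, the predictor would see `noisy_ctx` (and `levels`) instead of
# the clean frames, so accumulated errors at inference time look familiar to it.
```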

GameNGen’s architecture. Source: ArXiv.

While GameNGen is currently focused on simulating an existing game, its underlying architecture hints at much more exciting possibilities.

In theory, similar systems could generate entirely new game environments and mechanics on the fly, resulting in games that adapt and evolve in response to player actions in ways that go far beyond current procedural generation techniques.

The path to AI-driven game engines

GameNGen isn’t emerging in isolation. It’s the most recent in a series of advancements bringing us closer to a world where AI doesn’t just assist in game development – it becomes the development process itself. For example:

  • World Models, introduced in 2018, demonstrated how an AI could learn to play video games by constructing an internal model of the game world.
  • AlphaStar, from 2019, is an AI developed by DeepMind specifically for playing the real-time strategy game StarCraft II. AlphaStar uses reinforcement learning and sophisticated neural networks to master the game, achieving remarkable proficiency and competing against top human players.
  • GameGAN, developed by Nvidia in 2020, showed how generative adversarial networks could recreate simple games like Pac-Man without access to the underlying engine.
  • DeepMind’s Genie, unveiled earlier this year, takes things a step further by generating playable game environments from video clips.
  • DeepMind’s SIMA, also introduced this year, is a sophisticated AI agent designed to understand and act on human instructions across various 3D environments. Using pre-trained vision and video prediction models, SIMA can navigate, interact, and perform tasks in diverse games without access to their source code.

It all points towards a tantalizing future in which worlds can expand and respond to player actions with realism and depth.

Imagine RPGs where every playthrough is truly unique, or strategy games with AI opponents who adapt in ways no human designer could predict. Worlds that bend and flex to your character’s journey. Dialogue and interactions that look and sound like the real thing.

The elephant in the room is that the AI technologies driving futuristic gameplay also threaten to displace developers and designers.

GameNGen points to a potentially autonomous future for game creation. Even before we reach that point, AI is already shrinking development teams across the software industry, including gaming.

Optimists argue this shift could free developers from manual asset creation, allowing them to focus on high-level design and storytelling within these new AI-driven frameworks.

For example, Marcus Holmström, CEO of game development studio The Gang, told MIT Technology Review, “Instead of sitting and doing it by hand, now you can test different approaches,” highlighting the technology’s capacity for rapid prototyping and iteration.

Holmström continued, “For example, if you’re going to build a mountain, you can do different kinds of mountains, and on the fly, you can change it. Then we’d tweak it and fix it manually so it matches. It’s going to save a lot of time.”

Borislav Slavov, a BAFTA-winning composer known for his work on Baldur’s Gate 3, similarly believes AI could push composers out of their comfort zones and allow them to “focus far more on the essence – getting inspired and composing deeply emotional and powerful themes.”

But for every voice heralding the benefits of AI-driven game design, there’s another sounding a note of caution.

Jess Hyland, a veteran game artist and member of the Independent Workers Union of Great Britain’s game workers branch, told the BBC, “I’m very aware that I could wake up tomorrow and my job might be gone.”

For some, it’s not only job displacement that’s in the crosshairs but also the fundamental nature of creativity in game development.

“It’s not why I got into making games,” Hyland adds, expressing concern that artists could be reduced to mere editors of AI-generated content. “The stuff that AI generates, you become the person whose job is fixing it.”

Moreover, while AAA studios might see AI as a tool for pushing the boundaries of what’s possible, the picture could look very different for indie developers.

While AI game development could lower barriers to entry, allowing more people to bring their ideas to life, it also risks flooding the market with AI-generated content, making it harder for truly original works to cut through the noise.

Chris Knowles, a former senior engine developer at UK gaming firm Jagex, warns that AI could exacerbate game cloning, particularly in the mobile market. “Anything that makes the clone studios’ business model even cheaper and quicker makes the difficult task of running a financially sustainable indie studio even harder,” he cautioned.

Resistance is brewing, too. The US union SAG-AFTRA, which launched the landmark Hollywood strike against production studios last year, recently initiated another strike over video game rights.

While it doesn’t directly affect developers, the strike aims to stop game studios from using AI to displace performers, such as voice actors and motion performers, without fair compensation.

The question is: how do you balance AI’s risks to the industry against the potential upsides for gamers, and for gaming as a whole? Can we really stop AI from revolutionizing an industry that prides itself on pushing technological boundaries?

After all, gaming is an industry that has long relied on technology to progress.

Earlier in the year, we spoke to Chris Benjaminsen, co-founder of AI game startup FRVR.

He argued that this technology lets people drive game creation from the ground up, acting as an antidote to large-scale developers who choose projects based on their financial returns.

It’s a pertinent point. Amid all the industry disruption, we can’t lose sight of the most important stakeholder: the players.

Advances in personalized gameplay, combined with giving people the power of choice over their gaming experiences, could usher in a new era of player-driven games.

Yes, developer jobs will likely get squeezed like many others across the creative industries. It’s never ideal for people to lose their jobs, but it salts the wound when those changes also compromise the quality of the products they once loved creating.

There is hope, however, that AI’s role in gaming is more likely to lead to wholesome outcomes than its role in the visual arts or film and TV. Game developers may also find it easier to adapt to and work alongside AI tools rather than being replaced by them.

This contrasts with fields like music, where AI can generate products directly competing with human artists. The technical complexity of game development may also create a more complementary relationship between AI and human creativity than a purely competitive one.

In the end, AI’s impact on game development will be debated for years to come. Like any powerful tool, its effects will depend on a wide range of stakeholders, from players to studios.

Will AI continue to push the boundaries of interactive entertainment? Or will we allow it to homogenize game development, sacrificing the unique visions of human creators on the altar of efficiency?

The answer, like the future of gaming itself, and indeed that of many other creative industries, is yet to be written. But one thing is certain: the controller is in our hands for now.

How we play this next level will determine the course of the game industry for generations to come.
