Rather than just procedurally generating a cityscape, the latest Nvidia AI has created a driveable world solely from watching videos.
A major trend among video game developers over the past few years has been the use of procedurally generated worlds, whereby a game engine uses algorithms to create a universe as the player moves through it. One of the best-known examples is No Man’s Sky, which creates truly alien landscapes as the player flies around in their spaceship.
However, graphics hardware producer Nvidia has revealed something altogether more powerful, and arguably a lot smarter: an AI that can create digital environments without needing to build them manually.
In a blogpost, the company revealed that its latest artificial intelligence (AI) has successfully created a 3D, driveable cityscape for automotive, gaming or virtual reality applications by training models on videos from the real world. Starting from a conditional generative neural network, the team trained the model to render new 3D environments from videos of people driving around various cities.
Explaining further, Nvidia’s researchers said that the network works on high-level descriptions of a scene: for example, segmentation maps or edge maps that describe where objects are and what they are, such as whether a particular part of the image contains a car or a building.
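To make the idea of a segmentation map concrete, here is a minimal sketch (not Nvidia’s actual pipeline) of how such a map might be prepared as conditioning input. The class labels and grid size are illustrative assumptions; real systems use per-pixel labels over full video frames.

```python
import numpy as np

# Hypothetical 4x4 segmentation map: each cell holds a class label.
# Labels are assumptions for illustration: 0 = road, 1 = car, 2 = building.
seg_map = np.array([
    [2, 2, 2, 2],
    [2, 1, 1, 2],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
])

def to_one_hot(labels, num_classes):
    """Convert an HxW label map into an HxWxC one-hot array,
    the per-pixel class encoding a conditional generator typically consumes."""
    return np.eye(num_classes)[labels]

conditioning = to_one_hot(seg_map, num_classes=3)
print(conditioning.shape)        # (4, 4, 3)
print(conditioning[1, 1])        # [0. 1. 0.] -- the "car" channel is active
```

The generator’s job, in this framing, is to turn such label channels into a photorealistic frame, filling in textures and lighting it has learned from real driving footage.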
Develop at a much lower cost
Based on what it has learned from watching countless videos of city streets, the AI is then able to fill in the details to create parked cars and apartment blocks. The engine used by Nvidia will be familiar to many small-time and major game developers: Unreal Engine 4.
A developer using this technology could lay out the world’s structure manually, then watch as the AI adds the graphical detail that brings it to life.
“One of the main obstacles developers face when creating virtual worlds – whether for game development, telepresence or other applications – is that creating the content is expensive,” said Bryan Catanzaro, vice-president of applied deep learning at Nvidia and leader of this research.
“This method allows artists and developers to create at a much lower cost, by using AI that learns from the real world.”
Based on the video released by the company, this is still very much at an early stage, given the blurriness of the AI-generated game world. It is an advanced version of pix2pix, an open-source technology for translating one kind of image into another.
However, a truly functional, commercial AI program for building detailed game worlds is expected to be a few decades away.
The team’s research so far has been published online.