Nvidia reveals AI model that creates easy-to-edit 3D objects from photos

21 Jun 2022

A group of 3D instrument models created with Nvidia 3D MoMa. Image: Nvidia

Nvidia said its new technique could make it easier for content creators to quickly import a 3D object into a graphics engine and edit it.

Nvidia researchers are showcasing new technology that could let graphics professionals create realistic 3D models from a series of 2D images in under an hour.

The tech company said its new method achieves this result through inverse rendering, a way to reconstruct a 3D model of an object or scene from a series of still photos.
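At its core, inverse rendering treats reconstruction as an optimisation problem: render an initial estimate of the scene, compare the result to the real photos, and adjust the scene parameters to shrink the difference. The toy sketch below (fitting a single flat colour to observed pixels by gradient descent) illustrates that loop only; it is a hypothetical illustration, not Nvidia's actual pipeline, which recovers full geometry, materials and lighting.

```python
import numpy as np

# "Observed" photos: pixels of a flat surface whose true colour we want to recover.
observed = np.full((4, 4, 3), [0.8, 0.3, 0.1])  # target RGB

def render(colour):
    """Toy differentiable renderer: paints every pixel with one colour."""
    return np.broadcast_to(colour, observed.shape)

colour = np.array([0.5, 0.5, 0.5])  # initial guess
lr = 0.5
for _ in range(100):
    diff = render(colour) - observed      # per-pixel error against the photos
    grad = 2 * diff.mean(axis=(0, 1))     # gradient of the mean squared error
    colour = colour - lr * grad           # gradient-descent update

print(np.round(colour, 3))  # converges towards [0.8, 0.3, 0.1]
```

Real inverse-rendering systems follow the same pattern, but optimise thousands of parameters (vertex positions, texture values, light intensities) through a differentiable renderer instead of a single colour.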

Nvidia VP of graphics research David Luebke said inverse rendering has long been the “holy grail” that unifies computer vision and graphics. He added that this technique could help creators quickly produce 3D objects that can be imported and edited, without some of the limitations of existing tools.

The research team said game studios and other creators currently have to use complex photogrammetry techniques that require significant time and effort to create 3D objects.

In March, Nvidia showcased its research into neural radiance fields, which allowed researchers to create a 3D scene in seconds from a collection of 2D photos taken from different angles. But these scenes were not created in a format that could be easily edited.

The new technique, called Nvidia 3D MoMa, generates triangle mesh models within one hour, and these are directly compatible with the 3D graphics engines and modelling tools that creators already use.
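Triangle meshes are the common currency of game engines and modelling tools: an object is stored as a list of vertices plus the triangles that connect them. As an illustration only (the article does not specify which file format 3D MoMa exports), the sketch below writes a one-triangle mesh as a Wavefront OBJ file, one of the plain-text interchange formats most such tools can import:

```python
# A minimal triangle mesh: vertex positions plus triangles referencing them.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
faces = [(1, 2, 3)]  # OBJ face indices are 1-based

lines = [f"v {x} {y} {z}" for x, y, z in vertices]
lines += [f"f {a} {b} {c}" for a, b, c in faces]

# Write the mesh to a .obj file that engines and DCC tools can open.
with open("triangle.obj", "w") as fh:
    fh.write("\n".join(lines) + "\n")
```

Because the format is an explicit list of vertices and faces, artists can open such a file and move, retexture or animate the geometry directly, unlike a neural radiance field, whose scene is implicit in network weights.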

It is hoped that MoMa will make it easier for architects, designers, concept artists and game developers to quickly import an object into a graphics engine to start working on it.

Nvidia Research showed the power of the technology in a video, where it created five 3D instrument models from around 100 photos of each instrument.

These were generated through inverse rendering and can be used as building blocks for a complex animated scene. The team was able to take the instruments out of their original scenes and import them into the Nvidia Omniverse 3D simulation platform to edit.

The researchers said creators would be able to easily swap out the material of a shape generated by Nvidia 3D MoMa. For example, the team could change the material of the trumpet model from its original plastic to gold, marble, wood or cork.

Nvidia said the video was made to celebrate jazz and its birthplace, New Orleans, where the research paper behind 3D MoMa will be presented this week at a conference.

Leigh Mc Gowran is a journalist with Silicon Republic

editorial@siliconrepublic.com