Omniverse Cloud will allow 3D designers and developers from anywhere in the world to collaborate in virtual world-building.
Meta may have popularised the idea of the metaverse in recent months, but it is far from the only company working on developments in this area.
Nvidia jumped on the bandwagon with Omniverse, a platform for real-time 3D simulation and design collaboration, and is now expanding the scope of this platform to accelerate the development of virtual worlds.
At its Graphics Technology Conference (GTC) yesterday (22 March), Nvidia made a spate of new product announcements, including Omniverse Cloud, which will make it easier for designers and developers to collaborate in virtual worlds for a variety of applications.
By making Omniverse available in the cloud, Nvidia will allow millions of developers across the world to collaborate on a much larger range of devices than was previously possible. Until now, developers required an Nvidia RTX-based system to access Omniverse.
Jensen Huang, co-founder and CEO of Nvidia, said that with the new cloud feature, designers working on Omniverse remotely will be able to collaborate “as if in the same studio”.
He gave the example that factory planners could work inside a ‘digital twin’ of a plant – a virtual version of the real thing – to design a new production flow before testing it out practically. Similarly, software engineers could test a new software build on the digital twin of a self-driving car before it is launched.
“A new wave of work is coming that can only be done in virtual worlds. Omniverse Cloud will connect tens of millions of designers and creators, and billions of future AIs and robotic systems,” said Huang.
By subscribing to the Omniverse Cloud programme, now in early access, designers can exploit the full potential of the Omniverse Create and View services without needing high-end GeForce or Nvidia RTX hardware, or upgrading their IT infrastructure.
Richard Kerris, VP of Omniverse, told reporters this week that the new cloud service is a response to “huge demand we’ve had from a number of customers who wanted access to this platform but were limited because of the platform they’re on”.
In an interview with VentureBeat, Kerris said that more than 150,000 users have downloaded Nvidia Omniverse to use in a wide range of applications from game development to industrial digital twins, adding that the metaverse is the network for the next generation of the web.
“We’re focused on the business and industrial side of virtual worlds, which we’re seeing tremendous feedback already from our customers, and use cases that are applicable today,” he said.
Other Nvidia products
At the GTC event yesterday, Nvidia also unveiled Hopper, its latest GPU architecture. The new technology, named after US computer scientist and ‘first lady of software’ Grace Hopper, is set to succeed Nvidia’s Ampere architecture launched two years ago.
H100, the first GPU built on the Hopper architecture, contains 80bn transistors and features a transformer engine that can speed up specific categories of AI models. Coupled with Nvidia’s multi-instance GPU (MIG) technology, the H100 can be partitioned into seven isolated GPU instances to run multiple workloads efficiently.
“Data centres are becoming AI factories – processing and refining mountains of data to produce intelligence,” Huang said. “Nvidia H100 is the engine of the world’s AI infrastructure that enterprises use to accelerate their AI-driven businesses.”
Another major product reveal was the Arm-based Grace CPU Superchip designed for AI and high-performance computing. It comprises two CPU chips connected over a new high-speed, low-latency chip-to-chip interconnect for up to 25 times greater energy efficiency.
“The Grace CPU Superchip offers the highest performance, memory bandwidth and Nvidia software platforms in one chip and will shine as the CPU of the world’s AI infrastructure,” Huang said.