Nvidia’s grand plan to populate a parallel universe with digital versions of humans and objects will be powered by new hardware.
The chip maker introduced a new server called OVX 3 that is built to create and operate the metaverse, said Bob Pette, vice president for professional visualization at Nvidia, during a press briefing ahead of its annual GPU Technology Conference this week.
The third-generation OVX hardware includes four of Nvidia’s L40 GPUs, which were introduced last year. It also has a BlueField-3 data processing unit, which handles data throughput and networking within the system.
The metaverse server can be configured with Intel’s Sapphire Rapids chip or Advanced Micro Devices’ chip codenamed Genoa.
If that isn’t enough, up to 64 OVX 3 servers can be packed into an OVX 3 SuperPOD. The servers are linked through the Spectrum-3 network interconnect for low-latency, high-bandwidth communication.
“The CPU used to have to worry about infrastructure services … BlueField-3 will do that by freeing up the CPU to feed the GPUs,” Pette said.
Nvidia has championed the concept of the metaverse for years, but it hasn’t evolved as quickly as generative AI, which took off with technologies like ChatGPT.
The L40 GPU is being targeted as the core hardware that generates images for Nvidia’s Omniverse, a suite of software and hardware tools for building digital worlds.
The L40 is based on the Ada Lovelace architecture, has 48GB of GDDR6 memory, and delivers 90.5 teraflops of FP32 performance.
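Those peak figures compose straightforwardly across a server and a SuperPOD. A quick back-of-the-envelope sketch in Python, using only the numbers cited in this article (four L40s per OVX 3 server, up to 64 servers per SuperPOD) and counting peak FP32 throughput only, ignoring interconnect overhead and real-world utilization:

```python
# Figures from the article; peak FP32 only, not a measured benchmark.
L40_FP32_TFLOPS = 90.5      # per L40 GPU
GPUS_PER_SERVER = 4         # L40s in one OVX 3 server
SERVERS_PER_SUPERPOD = 64   # OVX 3 servers in one SuperPOD

server_tflops = L40_FP32_TFLOPS * GPUS_PER_SERVER
superpod_tflops = server_tflops * SERVERS_PER_SUPERPOD

print(f"Per server:   {server_tflops:.1f} TFLOPS FP32")
print(f"Per SuperPOD: {superpod_tflops / 1000:.2f} PFLOPS FP32")
```

That works out to 362 teraflops per server and roughly 23 petaflops per fully populated SuperPOD, at peak.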
Nvidia is installing the older OVX-2 systems in Microsoft’s Azure, which will host a service called Omniverse Cloud. The OVX-2 system will power Microsoft’s plans to bring the metaverse to Office 365.
“People do not need to bring their own OVX systems. It is getting everything from Azure and just supplements what they have already got with their virtual workstations and their AI instances as well,” Pette said.
Through Omniverse Cloud, companies will be able to build and simulate digital replicas of objects or people. The metaverse offering will be available in the second half of this year.
Nvidia also announced new GPUs at GTC. The L4 is an accelerator for AI and graphics that can be slotted into servers, and it delivers four times the ray-tracing performance of previous-generation chips. Beyond that, Nvidia cherry-picked comparative numbers, pitting the GPU against weak baselines to highlight its strength.
For example, Nvidia said the L4 is 120 times faster than CPUs for video-based AI. It also claims 2.7 times more generative-AI performance, a questionable benchmark given that the technology is still emerging. The L4 will be used in Google Cloud and will be available from server makers including Dell, Lenovo and Supermicro.
Nvidia also announced RTX Ada Generation GPUs for laptops, which deliver twice the performance and power efficiency of comparable previous-generation laptop GPUs.
Combined with DLSS 3 (Deep Learning Super Sampling 3), which uses AI to upscale video and images to higher resolutions, “there is a huge advance on the use of AI for rendering and visualization,” Pette said.
The company also announced the RTX 4000 Ada Generation for small-form-factor desktops. It has 20GB GDDR6 memory, and supports multiple displays.
Beyond graphics, the new GPUs are designed for use with Omniverse and will ship later this month.