The Top 5 Announcements Nvidia Made At GTC 2021


At Nvidia’s recent fall GPU Technology Conference (GTC) 2021, the chipmaker laid out its plans to help enterprises enter the virtual world by expanding Omniverse.

“A constant theme you’ll see — how Omniverse is used to simulate digital twins of warehouses, plants and factories, of physical and biological systems, the 5G edge, robots, self-driving cars, and even avatars,” said Nvidia CEO Jensen Huang.

Typically, Nvidia uses GTC to take the covers off new products, and this year was no different. Here are the top five product announcements from GTC.

Omniverse

Virtual worlds are all the rage: Facebook changed its corporate name to Meta to stake its claim on the metaverse, and Microsoft announced its own metaverse vision with 3D avatars for Microsoft Teams. At GTC, Nvidia made a number of announcements around its own take on the metaverse, Omniverse, that make it easier to train AI models and build more realistic virtual worlds. With Omniverse, “we now have the technology to create new 3D worlds or model our physical world,” Huang said.

Nvidia introduced the Omniverse Replicator, a synthetic data-generation engine. Omniverse Replicator is a tool that should ultimately help organisations build better digital twins — and thus, better AI-powered tools in the real world.
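To make the idea concrete, here is a minimal Python sketch of synthetic data generation with domain randomisation. It is not the Replicator API, which runs inside Omniverse; the scene parameters and class names below are invented for illustration.

```python
# Minimal, generic sketch of domain-randomised synthetic data generation.
# This is NOT the Omniverse Replicator API; it only illustrates the idea of
# varying scene parameters at random and emitting ground-truth labels for free.
import random

def sample_scene(num_objects=3):
    """Randomise object placement, size and lighting, and return labels."""
    scene = {
        "lighting_intensity": random.uniform(0.2, 1.0),
        "camera_height_m": random.uniform(1.0, 3.0),
        "objects": [],
    }
    labels = []
    for _ in range(num_objects):
        x, y = random.uniform(0, 10), random.uniform(0, 10)
        size = random.uniform(0.2, 1.5)
        cls = random.choice(["pallet", "box", "forklift"])
        scene["objects"].append({"class": cls, "x": x, "y": y, "size": size})
        # Ground truth comes straight from the generator -- no manual labelling.
        labels.append({"class": cls, "bbox": (x - size / 2, y - size / 2, size, size)})
    return scene, labels

# Generate a small synthetic dataset of labelled scenes.
dataset = [sample_scene() for _ in range(1000)]
```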

Nvidia is introducing two applications built with Replicator that demonstrate its use cases: Nvidia Drive Sim, a virtual world for creating digital twins of autonomous vehicles, and Nvidia Isaac Sim, its counterpart for robots.

Next, Nvidia is taking Omniverse beyond replications of the real world with the new Omniverse Avatar platform. Avatar is an end-to-end platform for creating embodied AIs that humans can interact with. It connects Nvidia’s technologies in speech AI, computer vision, natural language understanding, recommendation engines and simulation. Avatars created on the platform are interactive characters with ray-traced 3D graphics; they can see, converse on a wide range of subjects and understand naturally spoken intent. Among the many Nvidia technologies behind Avatar is Riva, a software development kit for building speech AI applications.
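That description is of a pipeline rather than a single model, and a rough sketch helps show how the pieces connect. The functions below are hypothetical placeholders, not the Avatar or Riva APIs; they only outline the hear-understand-respond-speak loop an embodied avatar runs on every conversational turn.

```python
# Hypothetical sketch of the pipeline an interactive avatar stitches together:
# speech recognition -> language understanding -> response -> speech synthesis.
# These functions are placeholders, not the Omniverse Avatar or Riva APIs.

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for a speech-to-text call (e.g. an ASR service)."""
    raise NotImplementedError

def understand(text: str) -> dict:
    """Placeholder for intent and entity extraction from the transcript."""
    raise NotImplementedError

def respond(intent: dict) -> str:
    """Placeholder for dialogue management / recommendation logic."""
    raise NotImplementedError

def synthesise(text: str) -> bytes:
    """Placeholder for text-to-speech driving the avatar's voice."""
    raise NotImplementedError

def avatar_turn(audio_chunk: bytes) -> bytes:
    """One conversational turn: hear the user, decide, and speak back."""
    transcript = transcribe(audio_chunk)
    intent = understand(transcript)
    reply = respond(intent)
    return synthesise(reply)
```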

Beyond Replicator and Avatar, Nvidia announced a range of other updates to Omniverse, including new AR, VR and multi-GPU rendering features.

Since its launch late last year, Omniverse has been downloaded 70,000 times by designers at 500 companies.

Zero-trust cybersecurity platform

There is no hotter topic in cybersecurity than zero trust, and at GTC, Huang announced a three-pillar framework to tackle the challenge: a zero-trust platform that combines the company’s BlueField data processing units (DPUs), the DOCA software development kit for BlueField, and the Morpheus security AI framework. The DPUs play a key role because they offload the processor-heavy security tasks that drive up the cost of firewalls and servers from those devices’ central processing units.

The DPU can handle tasks such as validating users and isolating data, letting the firewalls and other devices do what they were meant to do.

DOCA 1.2 and Morpheus provide the developer tools and AI frameworks used to inspect logs and application traffic and to customise zero-trust policies. As part of the launch, Juniper Networks and Palo Alto Networks were announced as vendors using the zero-trust platform.
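As a rough illustration of the kind of AI-driven traffic inspection described here, the sketch below flags anomalous flow records with a stock scikit-learn model. It is not the Morpheus API; Morpheus runs GPU-accelerated pipelines over telemetry streamed from BlueField DPUs, and the feature names below are invented for the example.

```python
# Generic illustration of AI-based traffic inspection -- not the Morpheus API.
# An IsolationForest flags anomalous flow records; Morpheus does this kind of
# scoring with GPU-accelerated pipelines on telemetry from BlueField DPUs.
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy flow features: [bytes_sent, packets, distinct_ports, failed_logins]
normal_flows = np.random.default_rng(0).normal(
    loc=[5_000, 40, 2, 0], scale=[1_000, 10, 1, 0.3], size=(5_000, 4)
)
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_flows)

suspicious = np.array([[250_000, 900, 60, 12]])   # exfiltration-like pattern
print(model.predict(suspicious))                  # -1 => flagged as anomalous
```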

Quantum

Huang introduced Nvidia Quantum-2, “the most advanced networking platform ever built,” which, paired with the BlueField-3 DPU, ushers in cloud-native supercomputing.

Quantum-2 offers the extreme performance, broad accessibility and strong security needed by cloud computing providers and supercomputing centers, he said.

The cuQuantum SDK speeds up simulations of quantum computers on classical systems. The first library from cuQuantum is currently in public beta, available to download. Called cuStateVec, it’s an accelerator for the state vector simulation method. That approach tracks the full state of the system in memory and can scale to tens of qubits. A second library coming next month is cuTensorNet, which is an accelerator using the tensor network method. It can handle up to thousands of qubits on some promising near-term algorithms.

Nvidia has integrated cuStateVec into qsim, Google Quantum AI’s state vector simulator, which can be used through Cirq, an open-source framework for programming quantum computers. In December, cuStateVec will be ready for use with Qiskit Aer, a high-performance simulator framework for quantum circuits from IBM.
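For a sense of how that integration is used, here is a minimal sketch that builds a small circuit in Cirq and runs it through qsim via the open-source qsimcirq package. Whether cuStateVec accelerates the run depends on how qsim was built; the sketch falls back to the CPU path by default.

```python
# Minimal sketch: build a circuit in Cirq and simulate it with Google's qsim,
# the state vector simulator that Nvidia's cuStateVec now accelerates on GPUs.
# Assumes the open-source `cirq` and `qsimcirq` packages; GPU/cuStateVec support
# depends on the qsim build, so this runs on CPU by default.
import cirq
import qsimcirq

qubits = cirq.LineQubit.range(3)
circuit = cirq.Circuit(
    cirq.H(qubits[0]),                      # put qubit 0 in superposition
    cirq.CNOT(qubits[0], qubits[1]),        # entangle qubits 0 and 1
    cirq.CNOT(qubits[1], qubits[2]),        # extend to a 3-qubit GHZ state
    cirq.measure(*qubits, key="m"),
)

simulator = qsimcirq.QSimSimulator()        # drop-in replacement for cirq.Simulator()
result = simulator.run(circuit, repetitions=100)
print(result.histogram(key="m"))            # expect only 000 and 111 outcomes
```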

Additionally, university research groups at Caltech, Oxford and MIT, and companies including IonQ are all integrating cuQuantum into their workflows. Nvidia also said it created the largest-ever simulation of a quantum algorithm for solving the MaxCut problem using cuQuantum. MaxCut algorithms are used to design large computer networks, find the optimal layout of chips with billions of silicon pathways and explore the field of statistical physics.
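To make the MaxCut problem itself concrete, the brute-force sketch below partitions a tiny graph’s vertices into two sets and counts the edges crossing the cut. The exhaustive search grows exponentially with the number of vertices, which is why heuristic and quantum approaches such as QAOA, and simulators like cuQuantum to study them, are of interest. The example graph is made up.

```python
# Brute-force MaxCut on a tiny graph, just to make the objective concrete:
# split the vertices into two sets so the number of crossing edges is maximal.
# Exhaustive search is exponential in the vertex count, hence the interest in
# heuristic and quantum approaches such as QAOA.
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # small example graph
num_vertices = 4

best_value, best_assignment = -1, None
for assignment in product([0, 1], repeat=num_vertices):
    cut = sum(1 for u, v in edges if assignment[u] != assignment[v])
    if cut > best_value:
        best_value, best_assignment = cut, assignment

print(best_value, best_assignment)  # 4 edges cut, e.g. for partition (0, 1, 0, 1)
```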

Earth Two

Huang announced the company will build E-2, or Earth Two, to simulate and predict climate change. “We will build a digital twin to simulate and predict climate change,” he said, framing it as a tool for understanding how to mitigate climate change’s effects. “This new supercomputer will be E2, Earth 2, the digital twin of Earth, running Modulus-created AI physics at a million times the speed of the Omniverse. All the technologies we’ve invented up to this moment are needed to make E2 possible. I can’t imagine greater and more important news.”

Earth Two can be used to run global simulations that would allow leaders to make informed decisions instead of ones based merely on hope. In a press conference, Huang said the new supercomputer will be 100 per cent funded by Nvidia and will be specifically designed for simulations in the Omniverse environment. He did not reveal anything about collaborations with other companies or research institutes. Details about the location and architecture of the system are to be revealed at a later date.

NeMo Megatron

Huang introduced NeMo Megatron to train large language models. Such models “will be the biggest mainstream HPC application ever,” he said.

The company opened the door for enterprises worldwide to develop and deploy large language models (LLMs), enabling them to build their own domain-specific chatbots, personal assistants and other AI applications that understand language with unprecedented levels of subtlety and nuance. The announcement spans the NeMo Megatron framework for training language models with trillions of parameters, the Megatron 530B customisable LLM that can be trained for new domains and languages, and Nvidia Triton Inference Server with multi-GPU, multi-node distributed inference functionality.

Combined with Nvidia DGX systems, these tools provide a production-ready, enterprise-grade solution to simplify the development and deployment of large language models.
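As a client-side illustration of the deployment half of that story, the sketch below sends a batch of token IDs to a model hosted on Triton Inference Server using Triton’s Python HTTP client. The model name, tensor names and shapes are hypothetical placeholders rather than a documented NeMo Megatron configuration.

```python
# Sketch of querying a language model served by Triton Inference Server,
# using Triton's Python HTTP client. The model name, input/output tensor names
# and shapes below are hypothetical placeholders; they only illustrate the
# client-side call pattern, not a documented NeMo Megatron deployment.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

token_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int32)
infer_input = httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")
infer_input.set_data_from_numpy(token_ids)

response = client.infer(
    model_name="megatron_example",            # hypothetical model name
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("output_ids")],
)
print(response.as_numpy("output_ids"))
```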