Exxact Corporation, a provider of high-performance computing (HPC), artificial intelligence (AI), and data center solutions, will soon offer a new line of Nvidia Quantum InfiniBand solutions that enable industry-standard networking, clustering, storage, and management protocols to operate seamlessly over a single “one-wire” converged network.
The Nvidia Quantum-2 InfiniBand networking platform enables extreme performance, broad accessibility, and strong security for cloud computing and supercomputing data centers.
What makes Nvidia Quantum-2 stand out from the previous generation is the new switch that delivers unprecedented port density with 64 ports of 400Gbps (or 128 ports of 200Gbps) connectivity.
It also features two updated networking endpoint options: the Nvidia BlueField-3 data processing unit (DPU) and the Nvidia ConnectX-7 network adapter. ConnectX-7 doubles the data rate of its predecessor and extends the performance benefits of RDMA (remote direct memory access), GPUDirect Storage, GPUDirect RDMA, and In-Network Computing. BlueField-3 offers 16 64-bit Arm cores to offload, accelerate and isolate the data center infrastructure stack, accelerating application performance, enhancing data center security, streamlining operations and reducing total cost of ownership.
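To make the GPUDirect Storage benefit concrete, the sketch below shows how an application might read a file straight into GPU memory, bypassing a CPU bounce buffer, using the RAPIDS KvikIO wrapper around the cuFile API. This is a minimal illustration, not Exxact's or NVIDIA's reference code; the file path and buffer size are hypothetical, and it assumes a GPUDirect Storage-capable driver stack is installed.

```python
import cupy
import kvikio

# Minimal sketch, assuming KvikIO (RAPIDS) and a GPUDirect Storage-capable stack.
# The path and array size below are hypothetical placeholders.
def load_tensor_to_gpu(path="/mnt/nvme/weights.bin", n_floats=1 << 20):
    buf = cupy.empty(n_floats, dtype=cupy.float32)  # destination buffer in GPU memory
    with kvikio.CuFile(path, "r") as f:
        f.read(buf)  # data moves from storage into GPU memory via the cuFile API
    return buf
```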
Also updated is the Scalable Hierarchical Aggregation and Reduction Protocol v3 (SHARPv3), which allows deep learning training operations such as collective reductions to be offloaded to and accelerated by the networking hardware, delivering 32x greater acceleration for AI applications than the previous generation.
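As a rough illustration of what that offload looks like from the application side, the sketch below runs a standard NCCL all-reduce from PyTorch; on a cluster where the NCCL SHARP plugin is installed, the reduction can be carried out in the switch fabric rather than on the GPUs. This is a hedged sketch under assumptions about a typical HPC-X/NCCL setup, not NVIDIA's documented recipe, and the environment variable and launcher details may differ per deployment.

```python
import os
import torch
import torch.distributed as dist

# Assumption: the nccl-rdma-sharp plugin is available and NCCL's CollNet path
# is enabled, allowing the all-reduce to be offloaded to Quantum switches.
os.environ.setdefault("NCCL_COLLNET_ENABLE", "1")

def allreduce_gradients():
    dist.init_process_group(backend="nccl")  # one process per GPU, launched via torchrun or srun
    rank = dist.get_rank()
    torch.cuda.set_device(rank % torch.cuda.device_count())
    grad = torch.randn(1 << 20, device="cuda")       # stand-in for a gradient tensor
    dist.all_reduce(grad, op=dist.ReduceOp.SUM)      # reduction may execute in the network fabric
    dist.destroy_process_group()

if __name__ == "__main__":
    allreduce_gradients()
```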
“Exxact is excited to soon offer NVIDIA’s latest generation of Quantum InfiniBand networking solutions to complement our expansive line of NVIDIA-Certified HPC and deep learning servers,” said Andrew Nelson, VP of Technology at Exxact Corporation. “Users will be able to address the infrastructure needs of the most demanding applications and experience a significant boost in data center performance, as well as completely centralized management that supports the most popular cluster topologies, such as Fat Tree, DragonFly+, and Torus.”
“The most challenging problems in the world will be solved by increasingly complex HPC systems that expand humanity’s understanding and deliver countless benefits to society,” said Gilad Shainer, SVP of Networking at NVIDIA. “The Nvidia Quantum-2 InfiniBand platform equips next-generation HPC and AI systems with the extreme performance demanded by innovative applications and workloads to supercharge breakthrough exploration.”
Nvidia Ethernet networking is also available, featuring SmartNICs that utilize Nvidia BlueField-2 with up to 200Gbps of Ethernet connectivity for modern GPU-accelerated workloads; best-in-class Ethernet switches with speeds from 10Gbps to 400Gbps; and LinkX direct attach copper (DAC) cables, active optical cables (AOCs), and optical transceivers that meet or exceed the IEEE 802.3 industry standards for 1G, 10G, 25G, 50G and 100G products.
The Nvidia Quantum-2 InfiniBand networking platform will be available soon.