Microsoft Azure has Nvidia’s Blackwell system up and running on GB200-powered AI servers. The company shared news of the deployment in a post to X, formerly known as Twitter.
The post said: “Microsoft Azure is the 1st cloud running Nvidia’s Blackwell system with GB200-powered AI servers. We’re optimizing at every layer to power the world’s most advanced AI models, leveraging Infiniband networking and innovative closed-loop liquid cooling. Learn more at MS Ignite.”
Microsoft was previously rumoured to be the first to gain access to the Blackwell servers.
According to a report from Tom’s Hardware, the machine Microsoft has deployed is not a full GB200 NVL72 system, but at least one GB200-based server rack with an unknown number of B200 processors.
The rack is likely being used to test both the Blackwell GPUs and the liquid cooling system, with commercial deployment to follow in the coming months. More details are expected to be shared at MS Ignite, Microsoft’s Chicago event.
Nvidia announced the Blackwell GPU family in March of this year. Blackwell architecture GPUs are manufactured using a custom-built TSMC 4NP process, with two reticle-limit GPU dies connected by a 10TBps chip-to-chip link into a single, unified GPU.
The GPU packs 208 billion transistors, up from 80bn in the Hopper series, and includes a second-generation transformer engine and new 4-bit floating point (FP4) AI inference capabilities.
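To give a sense of how coarse 4-bit floating point is, the sketch below enumerates every value representable in an E2M1 layout (1 sign, 2 exponent, 1 mantissa bit, bias 1). This layout follows the OCP Microscaling FP4 convention and is an assumption for illustration; the article does not specify Nvidia’s exact encoding.

```python
# Enumerate all values of a hypothetical 4-bit E2M1 float
# (1 sign bit, 2 exponent bits, 1 mantissa bit, exponent bias 1).
# Assumed layout per the OCP MX spec, not an Nvidia-confirmed detail.

def fp4_e2m1_values():
    values = set()
    for exp in range(4):          # 2 exponent bits -> fields 0..3
        for man in range(2):      # 1 mantissa bit -> 0 or 1
            if exp == 0:
                # subnormal: no implicit leading 1
                mag = man * 0.5
            else:
                mag = (1 + man * 0.5) * 2 ** (exp - 1)
            values.add(mag)
            values.add(-mag)
    return sorted(values)

print(fp4_e2m1_values())
# Only 15 distinct values, from -6 to 6 -- which is why FP4 is pitched
# at inference, where weights tolerate aggressive quantization.
```

The entire number line collapses to {0, ±0.5, ±1, ±1.5, ±2, ±3, ±4, ±6}, which illustrates the trade: far less memory and bandwidth per weight, at the cost of precision.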
It is estimated that an NVL72 GB200 machine with 72 B200 graphics processors will require around 120kW of power.
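As a back-of-the-envelope check on that figure, dividing the rack estimate evenly across the GPUs gives the rough per-slot power draw. The even split is purely illustrative; in practice the 120kW also covers Grace CPUs, networking, and cooling overhead.

```python
# Rough arithmetic sketch using the article's estimates.
# The per-GPU split is an illustrative assumption, not an Nvidia spec.
rack_power_kw = 120   # estimated draw of an NVL72 GB200 rack
num_gpus = 72         # B200 GPUs per rack

per_gpu_w = rack_power_kw * 1000 / num_gpus
print(f"~{per_gpu_w:.0f} W per GPU slot")  # ~1667 W
```

At well over a kilowatt per GPU slot, the figure also explains why the deployment pairs the racks with closed-loop liquid cooling rather than air.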
In August, the Blackwell GPU family was reported to be facing delays due to an unexpected design flaw. Later that month, the company stated that the issue had been resolved.
Others betting on the Blackwell chips include Google, Meta, and CoreWeave, all of which have ordered the GPUs in large quantities.