Nvidia has launched an AI platform, bringing together its GPUs from various global cloud providers.
Dubbed Nvidia DGX Cloud Lepton, the platform and compute marketplace connects GPUs from providers including CoreWeave, Crusoe, Firmus, Foxconn, GMI Cloud, Lambda, Nebius, Nscale, SoftBank Corp., and Yotta Data Services.
Users of DGX Cloud Lepton can access GPU compute capacity for both on-demand and long-term computing within specific regions to support sovereign AI operational needs.
“Nvidia DGX Cloud Lepton connects our network of global GPU cloud providers with AI developers,” said Jensen Huang, Founder and CEO of Nvidia.
“Together with our NCPs, we’re building a planetary-scale AI factory.”
According to Nvidia, the platform will bring together tens of thousands of GPUs, including the Blackwell series, and other Nvidia architecture GPUs. The platform integrates with Nvidia’s software stack, including the Nvidia NIM and NeMo microservices, and Nvidia Blueprints and Cloud Functions.
Nvidia acquired Lepton AI, a server rental company that leases GPU servers from cloud providers and rents them out to its own customers, in April 2025.
Nvidia also has a cloud-within-a-cloud offering. Launched in 2023 and called DGX Cloud, the service runs on top of other companies’ cloud platforms: the cloud providers lease Nvidia’s servers and deploy them as a cloud that Nvidia can market and sell to enterprises looking for large GPU supercomputers.
Google, Microsoft, Oracle, and AWS have all adopted the offering, with AWS the last to do so, in December 2024.