Domino Data Lab, a provider of the leading Enterprise MLOps platform, announced new integrations with Nvidia that extend the fast and flexible deployment of GPU-accelerated machine learning models across modern tech stacks – from data centres to dash cams.
Following Domino’s recent qualification for the Nvidia AI Enterprise software suite, Domino is now the first MLOps platform integrated with Nvidia Fleet Command, enabling seamless deployment of models across edge devices. A new curated MLOps trial, available through Nvidia LaunchPad, fast-tracks AI projects from prototype to production, while new support for on-demand Message Passing Interface (MPI) clusters and the Nvidia NGC catalogue streamlines access to GPU-accelerated tooling and infrastructure, furthering Domino’s market-leading openness.
“Streamlined deployment and management of GPU-accelerated models bring a true competitive advantage,” said Thomas Robinson, VP of Strategic Partnerships & Corporate Development at Domino. “We led the charge as the first Enterprise MLOps platform to integrate with Nvidia AI Enterprise, Nvidia Fleet Command, and Nvidia LaunchPad. We are excited to help more customers develop innovative use cases to solve the world’s most important challenges.”
Edge Device Support Streamlines Model Deployment across Modern Tech Stacks through MLOps
Domino’s new support for the Fleet Command cloud service for edge AI management further reduces infrastructure friction and extends key enterprise MLOps benefits – collaboration, reproducibility, and model lifecycle management – to Nvidia-Certified Systems in retail stores, warehouses, hospitals, and city street intersections.
This integration relieves data scientists of IT and DevOps burdens as they build, deploy, monitor, and manage GPU-accelerated models at the edge. Data scientists can quickly iterate on models using Domino’s Enterprise MLOps platform, then use Fleet Command as a turnkey solution to orchestrate the edge AI lifecycle: streamlining deployments, managing over-the-air updates, and monitoring models with a minimal infrastructure footprint.
Accelerated Proofs-of-Concept with the First MLOps Platform on Nvidia LaunchPad
Further deepening Domino’s collaboration with Nvidia to accelerate model-driven business, the company’s Enterprise MLOps platform is also now the first MLOps platform available through the Nvidia LaunchPad program. LaunchPad gives enterprises immediate, short-term access to Nvidia AI Enterprise on VMware vSphere with Tanzu, running on private accelerated compute infrastructure, along with curated labs.
Teams can use LaunchPad to quickly test AI initiatives on the complete stack underpinning joint Domino and Nvidia AI solutions, and can get hands-on experience in a lab that demonstrates how to scale data science workloads with Domino’s Enterprise MLOps platform. This experience instantly delivers MLOps benefits – collaboration and reproducibility – optimised and pre-configured for purpose-built AI infrastructure. With a proof-of-concept in LaunchPad validated by Domino and Nvidia, teams gain the confidence to deploy at production scale on the same complete stack they can purchase.
“Enterprise AI requires fast iteration with seamless, flexible model deployment to deliver results that make an impact for businesses,” said Manuvir Das, VP of Enterprise Computing at Nvidia. “Nvidia’s collaboration with Domino helps customers accelerate time-to-value for their AI investments, with deployment options for data scientists and developers across every stage of their AI journey.”
Support for On-Demand MPI Clusters and Nvidia NGC Streamlines MLOps with GPU-Optimised Software
Further new integrations bring the added Enterprise MLOps benefits of interactive workspaces, collaboration, reproducibility, and democratised GPU access to Nvidia’s expanding portfolio of GPU-optimised solutions.
New support for on-demand MPI clusters allows data scientists to use Nvidia DGX nodes in the same Kubernetes cluster as Domino. Available today for Domino environments and NGC images, this integration eliminates the time data scientists waste on administrative DevOps tasks, freeing them to start innovating on deep learning models.
Domino also now natively supports the Nvidia NGC catalogue and the Nvidia AI platform. As a hub of AI frameworks (such as PyTorch and TensorFlow), industry-specific SDKs, and pre-trained models, this GPU-optimised AI software simplifies and accelerates end-to-end workflows. Data science teams can now run NGC containers in Domino while maintaining two-way code interoperability with raw NGC containers. Domino will continue to expand support for the Nvidia AI platform through the new Nvidia AI Accelerated program.