Domino Data Lab Announces New Ecosystem Solutions with NVIDIA 


Domino Data Lab, the provider of the leading Enterprise MLOps platform, announced two solutions with NVIDIA ecosystem partners that accelerate customers’ journeys towards hybrid MLOps.

In a major milestone complementing its Nexus hybrid and multi-cloud MLOps capabilities, Domino's new integration with NVIDIA GPUs and NetApp data management and storage solutions will allow teams to run AI/ML workloads in either data centres or AWS without refactoring them. To support partners as they advance AI centres of excellence with full-stack solutions, Domino has also released an NVIDIA-validated reference architecture for integrating MLOps with on-premises NVIDIA DGX systems.

Becoming hybrid and multi-cloud enabled is the next step change in modern IT infrastructure. According to Forrester, 71 per cent of organisations consider hybrid cloud support a critical AI platform capability for executing their AI strategy. Enterprises need hybrid and multi-cloud data science capabilities to scale outcomes everywhere the organisation operates and stores data – across clouds, cloud regions, and on-premises infrastructure. These capabilities are a must for driving innovation and time to value with data science, complying with data sovereignty laws, optimising cost and mitigating vendor lock-in.

Domino innovates with NetApp to announce seamless AI/ML workload management across environments

To advance the hybrid MLOps vision, NetApp, a provider of powerful AI data management solutions, has validated Domino Nexus as a new solution supporting the Domino Enterprise MLOps Platform on Amazon FSx for NetApp ONTAP. Built to support evolving hybrid workload requirements, the new AWS Managed Services (AMS) solution will simplify the deployment and management of large-scale applications across hybrid environments.

Domino Nexus will provide a holistic view of enterprise-wide data science workloads across all regions and environments, including the newly announced AMS solution and the existing on-premises NetApp ONTAP AI integrated solution. NetApp's Cloud Manager will act as a control plane to build, secure, protect, and govern data with enterprise-class data services – including replication of data across environments with SnapMirror on Amazon FSx for NetApp ONTAP, and transfer of network-attached storage (NAS) data between on-premises systems and cloud object stores with Cloud Sync. This new approach automates the data science process and accelerates AI workload deployment.

“The NetApp, Domino, and NVIDIA collaboration continues to deliver industry-leading solutions,” said Phil Brotherton, Vice President of Solutions and Alliances at NetApp. “As hybrid and multi-cloud enterprise IT strategies continue to evolve, we’re committed to delivering best-in-class model training infrastructure together with our AI industry leaders.”

Domino builds the foundation for optimised on-premises AI centre-of-excellence partner solutions with NVIDIA

In support of its hybrid MLOps vision, Domino has created, with NVIDIA, an integrated MLOps and on-premises GPU reference architecture validated by both technology providers for optimal performance across NVIDIA DGX systems. The companies are enabling joint ecosystem solution partners such as Mark III Systems to build industry-leading, end-to-end AI platforms, systems, and software – all custom-fit to customers' enterprise IT infrastructure and strategies.

“Although the transformative potential of AI is obvious, the journey to building an AI Center of Excellence is not, requiring deep, full-stack AI expertise and a diverse team,” said Andy Lin, Vice President of Strategy & Innovation and CTO at Mark III Systems. “We’ve already seen some incredible client successes working with Domino, NVIDIA, and NetApp, and will continue to work with enterprises and institutions to realize a competitive advantage through AI.”

A Flexible Path Through Complex Territories: On-Premises and Multi-cloud

By 2026, nearly all multinational organisations will invest in local data processing infrastructure and services to mitigate the risks associated with data transfer, according to David Menninger, SVP & Research Director at Ventana Research. Domino's strategy for hybrid MLOps – launching as a component of the Domino Enterprise MLOps Platform in early 2023 – aims to meet this need and others commonly faced by companies as they scale data science workloads.

“The decision to train or operationalize models on-premises or in the cloud is driven by model performance, security, and regulatory considerations,” said Thomas Robinson, VP of Strategic Partnerships and Corporate Development at Domino Data Lab. “Building an ecosystem of partner solutions helps customers navigate investment decisions in complex hybrid and multi-cloud environments.”

“Scaling enterprise AI requires flexible solutions custom-tailored to align with IT infrastructure, policies, and practices,” said Matt Hull, Vice President of Global AI Data Center Solution Sales at NVIDIA. “NVIDIA and Domino Data Lab are working to enable our ecosystem of expert partners to help customers maximise productivity, with quick time-to-value using end-to-end AI solutions.”