DDN A3I Storage Accelerates AI Recommendation Systems with NVIDIA DGX Systems

DDN A3I Storage Accelerates AI Recommendation Systems, Image Analysis and Natural Language Processing with Up to 46 Times More IOPS on GPU Infrastructure

Enterprises increasingly rely on artificial intelligence (AI) as a powerful business enabler with massive potential rewards. However, research has shown that a staggering nine out of ten AI initiatives fail because of the complexity of architecting and optimising AI deployments and the difficulty of taking them from prototype to production. DDN, the global leader in AI and multi-cloud data management solutions, takes the successful integration of AI in enterprise IT infrastructures to the next level by delivering powerful, easy-to-deploy, production-ready and cost-effective AI storage systems to its customers. DDN’s A3I™ (Accelerated, Any-Scale AI) solutions bring unmatched performance and complete control to AI workflows at any scale.

‘To achieve a strong competitive edge, enterprises must extract more value out of their data and greatly accelerate time to insight. For that to happen, a flexible and agile data storage platform which reliably and intelligently handles massive amounts of dynamic data is essential’, said Dr James Coomer, vice president of products, DDN. ‘For the past two decades, DDN has successfully deployed value-driven intelligent infrastructure storage solutions. Our increasing success in bringing AI-enabling data storage to more than 11,000 customers reinforces our reputation as a leader in At-Scale solutions.’

DDN and NVIDIA offer a joint integrated AI solution that is highly optimised across data storage, software, networking and GPU acceleration. As the only storage platform certified by NVIDIA for DGX SuperPOD™ with DGX™ A100 systems, DDN A3I handles hundreds of petabytes of dynamic data and is optimised for the entire end-to-end AI workload.

In addition to DGX SuperPOD support, DDN’s A3I portfolio has grown to include NVIDIA DGX POD, offering entry-level infrastructure solutions for the enterprise. These start with two-node DGX A100 configurations for businesses that want a starter system with modular scale as AI workloads grow.

The DDN and NVIDIA joint solution delivers amazing value to customers through:

  • 33 times faster AI inference than traditional storage
  • 10 times faster AI data ingestion
  • Up to 46 times more IOPS into GPU infrastructure

‘Advanced AI and data science applications process massive amounts of data, making it critical for storage to keep up with the accelerated computing of NVIDIA DGX systems’, said Tony Paikeday, senior director of DGX systems, NVIDIA. ‘The DDN A3I reference architectures for NVIDIA DGX SuperPOD and DGX POD enable organisations to rapidly implement the high-performance AI infrastructure they need to power their most important work at any level of scale.’

DDN A3I systems combine DDN storage and AI-powered data management tools to deliver acceleration in complex AI applications such as recommendation systems, image analysis and natural language processing (NLP). DDN A3I is fully optimised to accelerate machine learning, streamline deep learning workflows and eliminate complexity in deployment and management at any scale.

‘AI- and machine learning-driven big data analytics is a popular next-generation application being deployed by enterprises undergoing digital transformation’, said Eric Burgener, research vice president, Infrastructure Systems, Platforms and Technologies, at IDC. ‘The introduction of this new, smaller NVIDIA DGX POD reference architecture makes it much easier to deploy the infrastructure to get AI workloads up and running quickly so they can start driving business value.’

DDN A3I client plugins for NVIDIA DGX systems and other GPU environments ensure that containerised AI workloads run fast out of the box. By delivering an optimised data path, A3I solutions overcome I/O bottlenecks previously considered insurmountable, accelerating performance-intensive applications and scalable data science use cases.