VAST Data, an AI data platform company, announced a strategic partnership with Genesis Cloud, an Infrastructure-as-a-Service (IaaS) provider for GPUs and accelerators in the cloud. Together, VAST and Genesis Cloud are poised to make AI and accelerated cloud computing more efficient, scalable and accessible to organisations across the globe.
Dr. Stefan Schiefer, CEO at Genesis Cloud, said, “To complement Genesis Cloud’s market-leading compute services, we needed a world-class partner at the data layer that could withstand the rigors of data-intensive AI workloads across multiple geographies. The VAST Data Platform was the obvious choice, bringing performance, scalability and simplicity paired with rich enterprise features and functionality. Throughout our assessment, we were incredibly impressed not just with VAST’s capabilities and product roadmap, but also their enthusiasm around the opportunity for co-development on future solutions”.
Chris Morgan, Vice President of Solutions at VAST Data, said, “With the VAST Data Platform, Genesis Cloud offers organisations access to Europe’s most performant and efficient GPU-accelerated cloud services, optimised to suit the needs of their business – from speed to scale to security and compliance. As VAST continues to expand our global presence in Europe and beyond, our partnership with Genesis Cloud allows us to serve our joint customers with flexible, high-performance infrastructure solutions and services for their growing AI and inference pipelines”.
Genesis Cloud helps businesses optimise their AI training and inference pipelines by offering performance and capacity for AI projects at scale, alongside enterprise-grade features. The company is using the VAST Data Platform to build the most comprehensive set of AI data services in the industry. With VAST, Genesis Cloud is enabling a new generation of AI initiatives and Large Language Model (LLM) development by delivering highly automated infrastructure with exceptional performance and hyperscaler efficiency.