Petuum Unveils Enterprise MLOps Platform
Petuum unveiled its new enterprise MLOps platform for AI/ML teams, now in private beta. The platform helps enterprise AI/ML teams operationalise and scale their machine learning pipelines to production, and Petuum describes it as the world's first composable platform for MLOps.

After years of development at CMU, Berkeley, and Stanford, as well as dozens of customer engagements in finance, healthcare, energy, and heavy industry, Petuum announced a limited release of its platform through a private beta for select customers.

Petuum's enterprise MLOps platform is built around principles of composability, openness, and extensibility. With universal standards for data, pipelines, and infrastructure, AI applications can be assembled from reusable building blocks and managed as part of a repeatable, assembly-line process. Petuum's users don't need infrastructure or DevOps expertise, glue code, or manual tuning; they can instead focus on rapidly deploying more projects in less time, with fewer resources and less outside help.

The end-to-end platform includes the AI OS, a low/no-code layer on Kubernetes optimised for AI workloads. Universal Pipelines let users compose and execute DAGs built from modular DataPacks for any kind of data, without deep ML-engineering expertise. The low/no-code Deployment Manager upgrades, reuses, and reconfigures pipelines in production, with built-in observability and user management. The platform also includes an experiment manager for amortised autotuning, which optimises pipelines of models and systems.
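To illustrate the idea of composing a pipeline as a DAG of reusable steps, here is a minimal sketch in Python. The `Pipeline`, `Step`, and `DataPack` names and their APIs below are hypothetical illustrations of the general pattern, not Petuum's actual interfaces.

```python
# Hypothetical sketch of a composable pipeline: steps declare dependencies,
# the pipeline topologically sorts them into a DAG and runs them in order.
# All names and APIs here are illustrative, not Petuum's real product API.
from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class DataPack:
    """A modular container for any kind of data passed between steps."""
    payload: dict


@dataclass
class Step:
    name: str
    fn: Callable[[DataPack], DataPack]
    deps: List[str] = field(default_factory=list)


class Pipeline:
    def __init__(self) -> None:
        self.steps: Dict[str, Step] = {}

    def add(self, step: Step) -> "Pipeline":
        self.steps[step.name] = step
        return self  # chaining keeps composition explicit

    def run(self, pack: DataPack) -> DataPack:
        done, order = set(), []

        def visit(name: str) -> None:
            # Depth-first traversal yields a topological order of the DAG.
            if name in done:
                return
            for dep in self.steps[name].deps:
                visit(dep)
            done.add(name)
            order.append(name)

        for name in self.steps:
            visit(name)
        for name in order:
            pack = self.steps[name].fn(pack)
        return pack


# Usage: ingest -> featurise -> train, wired together by step name.
pipeline = (
    Pipeline()
    .add(Step("ingest", lambda p: DataPack({**p.payload, "rows": 100})))
    .add(Step("featurise", lambda p: DataPack({**p.payload, "features": 8}),
              deps=["ingest"]))
    .add(Step("train", lambda p: DataPack({**p.payload, "model": "fitted"}),
              deps=["featurise"]))
)
result = pipeline.run(DataPack({}))
```

Because each step only consumes and produces a `DataPack`, steps can be swapped, reused, and reconfigured without touching the rest of the pipeline, which is the essence of the composability claim.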