AMD Selects Google Cloud For Scaling Chip Design Workloads


Google Cloud’s infrastructure will help AMD achieve more flexibility with semiconductor design needs

Google Cloud and AMD have announced a technology partnership in which AMD will run electronic design automation (EDA) for its chip-design workloads on Google Cloud, complementing the on-premises capabilities of AMD's data centres. AMD will also leverage Google Cloud's global networking, storage, artificial intelligence, and machine learning capabilities to advance its hybrid and multi-cloud strategy for these EDA workloads.

“Leveraging the Google Cloud C2D instances powered by 3rd Gen EPYC processors for our complex EDA workloads has helped our engineering and IT teams tremendously. C2D has allowed us to be more flexible and provided a new avenue of high-performance resources to mix and match the right computing solution for our complex EDA workflows. We’re happy to work with Google Cloud to take advantage of their wealth of cloud features and the capabilities of 3rd Gen EPYC,” said Mydung Pham, Corporate Vice President, Silicon Design Engineering, AMD.

Through this multi-year technology partnership, Google Cloud and AMD will continue to explore new capabilities and innovations, while AMD will enjoy benefits such as:

  • Increased flexibility and choice to run applications in the most efficient manner possible
  • Improved design and operations from applied Google Cloud artificial intelligence and machine learning tools and frameworks
  • More transparency with costs and resource consumption
  • Greater agility and less vendor lock-in

To remain flexible and scale quickly, AMD will add Google Cloud’s newest compute-optimized C2D VM instances, powered by 3rd Gen AMD EPYC processors, to its suite of resources for EDA workloads. By leveraging Google Cloud, AMD anticipates running more designs in parallel, giving the team more flexibility to meet short-term compute demands without reducing the resources allocated to long-term projects.