RISC-V chip designer SiFive has announced a new series of AI accelerator IP for high-performance workloads.
The company says the Intelligence XM Series is its first IP to include a scalable matrix engine, designed to accelerate time-to-market for semiconductor companies building system-on-chip solutions for Edge, IoT, and data centres.
Built on the open-standard RISC-V instruction set architecture, the Intelligence XM Series features four X-cores per cluster.
Each cluster can deliver 16 tera-operations per second (TOPS) of INT8 compute or 8 teraflops (TFLOPS) of BF16 compute per GHz. Each XM Series cluster also has 1 TB/s of sustained memory bandwidth, and clusters can access memory either through a high-bandwidth port or through a CHI port for coherent memory access.
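To put those per-GHz figures in context, here is a minimal back-of-the-envelope sketch (not SiFive tooling) that scales the quoted per-cluster numbers by clock frequency and cluster count; the 2 GHz clock and four-cluster configuration used in the example are hypothetical, not published XM Series specifications.

```python
# Illustrative sketch: scale SiFive's quoted per-GHz, per-cluster figures
# to an assumed clock and cluster count. Clock/cluster values are hypothetical.

PER_GHZ_INT8_TOPS = 16.0   # 16 INT8 tera-operations per second, per GHz, per cluster
PER_GHZ_BF16_TFLOPS = 8.0  # 8 BF16 teraflops, per GHz, per cluster

def peak_throughput(clock_ghz: float, clusters: int = 1) -> dict:
    """Peak INT8/BF16 throughput for a given clock frequency and cluster count."""
    return {
        "int8_tops": PER_GHZ_INT8_TOPS * clock_ghz * clusters,
        "bf16_tflops": PER_GHZ_BF16_TFLOPS * clock_ghz * clusters,
    }

if __name__ == "__main__":
    # Example: a hypothetical 2 GHz design with four XM clusters
    print(peak_throughput(clock_ghz=2.0, clusters=4))
    # -> {'int8_tops': 128.0, 'bf16_tflops': 64.0}
```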
Patrick Little, CEO of SiFive, said, “Many companies are seeing the benefits of an open processor standard while they race to keep up with the rapid pace of change with AI.”
“AI plays to SiFive’s strengths with performance per watt and our unique ability to help customers customise their solutions. We’re already supplying our RISC-V solutions to five of the ‘Magnificent 7’ companies (Alphabet, Amazon, Apple, Meta, Microsoft, Nvidia, and Tesla), and as companies pivot to a ‘software first’ design strategy, we are working on new AI solutions with a wide variety of companies from automotive to data centre and the intelligent Edge and IoT.”
SiFive also announced its intention to open source a reference implementation of its SiFive Kernel Library, a move it says shows the company’s support for the wider RISC-V community.
“RISC-V was originally developed to efficiently support specialised computing engines including mixed-precision operations,” said Krste Asanovic, SiFive Founder and Chief Architect. “This, coupled with the inclusion of efficient vector instructions and the support of specialised AI extensions, are the reasons why many of the largest data centre companies have already adopted RISC-V AI accelerators.”
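For readers unfamiliar with the term in Asanovic's comment, "mixed-precision" here means computing with low-precision inputs (such as BF16) while accumulating results at higher precision. The pure-Python sketch below only emulates that idea for illustration; it is not SiFive code and does not use the RISC-V vector or matrix extensions themselves.

```python
# Illustrative emulation of mixed-precision arithmetic: BF16 inputs, higher-precision accumulation.
import struct

def to_bf16(x: float) -> float:
    """Truncate a value to BF16 precision (keep the top 16 bits of its FP32 bit pattern)."""
    bits = struct.unpack("<I", struct.pack("<f", x))[0]
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

def bf16_dot(a, b) -> float:
    """Dot product with BF16-truncated inputs; the accumulator stays at full precision
    (a Python float here, standing in for an FP32 accumulator in hardware)."""
    acc = 0.0
    for x, y in zip(a, b):
        acc += to_bf16(x) * to_bf16(y)
    return acc

print(bf16_dot([0.1, 0.2, 0.3], [1.0, 2.0, 3.0]))
```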