Semidynamics RISC-V AI IP selected for LLM applications

December 09, 2024

Semidynamics, a leading IP company for high-performance, AI-enabled RISC-V processors, has announced that UPMEM has selected it as the core provider for its next-generation LPDDR5X Processing In Memory (PIM) device.

The standard RISC-V architecture, with the integrated Tensor Unit and the long-latency data-access optimizer called Gazillion, allows seamless and efficient integration of AI and LLM models. The Tensor Unit performs the matrix multiplications required by AI. It offers low-power operation, is easy to program since no DMAs are needed, and provides universal RISC-V compatibility by working under any RISC-V vector-enabled Linux without changes.
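To make the programming-model point concrete, the following is a minimal sketch of standard RISC-V Vector (RVV 1.0 intrinsics) code of the kind that runs unchanged on any vector-enabled RISC-V Linux. The kernel and function name are illustrative assumptions, not Semidynamics code; the sketch only shows that data is touched with ordinary vector loads and stores, with no DMA descriptors to set up.

/* Hypothetical example, not Semidynamics code: y[i] += a * x[i],
 * the inner step of a matrix-multiplication row, written with the
 * standard RVV 1.0 intrinsics available in any RVV-enabled toolchain. */
#include <riscv_vector.h>
#include <stddef.h>

void saxpy_rvv(size_t n, float a, const float *x, float *y) {
    for (size_t i = 0; i < n;) {
        size_t vl = __riscv_vsetvl_e32m8(n - i);            /* elements this pass */
        vfloat32m8_t vx = __riscv_vle32_v_f32m8(x + i, vl); /* plain load, no DMA setup */
        vfloat32m8_t vy = __riscv_vle32_v_f32m8(y + i, vl);
        vy = __riscv_vfmacc_vf_f32m8(vy, a, vx, vl);        /* fused multiply-accumulate */
        __riscv_vse32_v_f32m8(y + i, vy, vl);               /* plain store back to memory */
        i += vl;
    }
}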

The IP delivers a massive internal bandwidth of 102.4 GB/s with low-energy data access of 1 pJ/bit, as well as significant processing capability: 8 TFLOPS in FP16/BF16 and 16 TOPS in INT8. These capabilities enable single-chip model inference at industry-leading performance and energy levels.
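As a rough sanity check on these figures (our interpretation, using a simple roofline-style ratio; not part of the announcement), the compute-to-bandwidth balance point works out to about 78 FLOPs per byte moved:

/* Back-of-envelope roofline ratio from the quoted specs:
 * 8 TFLOPS FP16 against 102.4 GB/s of internal bandwidth. */
#include <stdio.h>

int main(void) {
    double flops = 8.0e12;   /* 8 TFLOPS FP16/BF16 */
    double bw    = 102.4e9;  /* 102.4 GB/s internal bandwidth */
    printf("balance point: %.1f FLOPs/byte\n", flops / bw); /* ~78.1 */
    return 0;
}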

By integrating co-processors into main DRAM memory, UPMEM accelerates data-intensive operations such as generative AI. UPMEM is developing a new generation of PIM chips for Large Language Model (LLM) inference that enables small to large models (such as Llama and Mistral) to run locally on smartphones, processing up to 15 times more queries per second while consuming 10 times less energy than leading SoCs. UPMEM’s PIM chips are the only solution that allows intensive GenAI compute while meeting smartphone battery-life and cost constraints.

Gilles Hamou, CEO of UPMEM, said, “Semidynamics’ GenAI compute RISC-V IP combines the power and efficiency needed for our disruptive Processing In Memory DRAM chips for mobile. Our combined technologies provide the only solution powerful enough, and sufficiently energy- and cost-effective, to run most generative AI compute on the smartphone.”

Roger Espasa, CEO of Semidynamics, added, “We are working with UPMEM and supporting them with their Processing In Memory approach. This is an extremely innovative way to enable the deployment of large language AI models, and we look forward to a long-term partnership with UPMEM.”