AI coprocessor enhances NPU capabilities

May 08, 2025
Cadence has unveiled the Tensilica NeuroEdge 130 AI co-processor, designed to complement neural processing units (NPUs) and enable end-to-end execution of the latest AI networks across automotive, consumer, industrial, and mobile systems. Built on the architecture of the Tensilica Vision DSP family, the NeuroEdge 130 delivers more than 30% area savings and over 20% lower dynamic power consumption without compromising performance. It also reuses the same software, AI compilers, libraries, and frameworks as the Vision DSPs to speed time to market.

According to Karl Freund, founder and principal analyst of Cambrian AI Research, the increasing adoption of AI processing in physical AI applications like autonomous vehicles, robotics, industrial automation, and healthcare has elevated the importance of NPUs. While NPUs handle intensive AI/ML workloads, there is a growing need to offload non-MAC layers to specialized processors for enhanced efficiency. The industry requires a low-power, high-performance solution optimized for co-processing to meet evolving AI processing demands.

The Tensilica NeuroEdge 130 AI co-processor has an extensible design that ensures compatibility with in-house NPUs, Cadence Neo™ NPUs, and third-party NPU IP. By efficiently handling the layers offloaded from the NPU, it outperforms application-specific predecessors in both performance and energy efficiency. Leveraging the power, performance, and area advantages of Tensilica DSPs, the NeuroEdge 130 achieves its area and power savings while maintaining performance comparable to Tensilica Vision DSPs on AI networks.

The NeuroEdge 130 features a configurable VLIW-based SIMD architecture and can act as a control processor, issuing instructions and commands to the NPU. Its ISA is optimized for the layers that run inefficiently on an NPU, and its programmability and flexibility allow it to sustain high performance and energy efficiency across current and future AI workloads.
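The division of labor between the NPU and the co-processor can be pictured as a simple partitioning step over a network's layers. The sketch below is purely illustrative: the op categories, the Layer class, and the partition() helper are hypothetical stand-ins, not part of any Cadence toolchain.

```python
# Purely illustrative sketch of NPU / co-processor work partitioning.
# The op sets, Layer class, and partition() helper are hypothetical,
# not part of any Cadence API.
from dataclasses import dataclass
from typing import List, Tuple

MAC_HEAVY_OPS = {"conv2d", "dense", "matmul"}   # assumed NPU-friendly (MAC-dominated) ops
# Everything else (activations, normalization, post-processing) is offloaded.

@dataclass
class Layer:
    name: str
    op: str

def partition(layers: List[Layer]) -> Tuple[List[Layer], List[Layer]]:
    """Split a layer list into an NPU schedule and a co-processor schedule."""
    npu, coproc = [], []
    for layer in layers:
        (npu if layer.op in MAC_HEAVY_OPS else coproc).append(layer)
    return npu, coproc

if __name__ == "__main__":
    net = [Layer("conv1", "conv2d"), Layer("fc1", "dense"),
           Layer("prob", "softmax"), Layer("detect", "nms")]
    npu, coproc = partition(net)
    print("NPU:", [l.name for l in npu])              # ['conv1', 'fc1']
    print("Co-processor:", [l.name for l in coproc])  # ['prob', 'detect']
```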

The Tensilica NeuroEdge 130 AI co-processor is backed by the Cadence NeuroWeave™ Software Development Kit (SDK), a versatile tool used across all of Cadence’s AI IP offerings. By leveraging the Tensor Virtual Machine (TVM) stack, the NeuroWeave SDK simplifies the tuning, optimization, and deployment of AI models for Cadence’s AI IP. Additionally, the NeuroEdge 130 features a lightweight standalone AI library, enabling customers to program AI layers directly on the processor and bypass potential overheads associated with certain compiler frameworks.
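Because the NeuroWeave SDK builds on the open-source TVM stack, its compile flow resembles a standard TVM model build. The sketch below shows a generic TVM flow for an ONNX model under stated assumptions: the model file, input name and shape, and the plain LLVM target are placeholders, and the actual NeuroEdge 130 target and NeuroWeave SDK entry points are not shown.

```python
# Generic TVM compile-and-run flow; a minimal sketch, not Cadence's SDK.
import onnx
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor

onnx_model = onnx.load("model.onnx")              # placeholder model file
shape_dict = {"input": (1, 3, 224, 224)}          # assumed input name and shape
mod, params = relay.frontend.from_onnx(onnx_model, shape_dict)

# Compile for a generic CPU target; a vendor flow would substitute its own backend.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

dev = tvm.device(target, 0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("input", np.random.rand(1, 3, 224, 224).astype("float32"))
module.run()
out = module.get_output(0).numpy()
```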

Boyd Phelps, senior vice president and general manager of the Silicon Solutions Group at Cadence, said the Tensilica NeuroEdge 130 AI co-processor addresses the evolving needs of AI SoC and system customers. With this purpose-built class of processors, Cadence aims to provide a small, efficient AI-focused co-processor that improves performance efficiency and future-proofs demanding AI applications. The NeuroEdge 130 offers a compelling option for customers seeking better power, performance, and area efficiency in their AI implementations.