At the embedded world Exhibition & Conference in Nuremberg, Ceva’s NeuPro-Nano AI processor was named the winner in the artificial intelligence category of the embedded award 2026. The award recognises technologies considered to represent significant advances in embedded system design and development.
The NeuPro-Nano is designed as a compact neural processing unit (NPU) intended to bring machine-learning inference to embedded and IoT devices with limited power and compute resources.
The embedded award is presented annually during embedded world to recognise innovative hardware, software and system developments in the embedded ecosystem. In 2026, the competition received more than 110 submissions, reflecting a wide range of technologies from established semiconductor vendors, specialised companies and start-ups.
“The innovative spirit within the embedded community remains impressive in 2026. We are witnessing a continuous convergence of technologies: AI capabilities are increasingly moving to the edge, vision systems are becoming more intelligent, and highly sophisticated hardware is achieving new levels of efficiency and scalability. This interplay of developments makes the current momentum particularly fascinating,” says Prof. Dr.-Ing. Axel Sikora, Chairman of the embedded world Conference and member of the embedded award jury.
“What also stands out is how companies tackle similar technical challenges from entirely different angles – often with clever and highly distinctive approaches.”
The NeuPro-Nano is a standalone NPU IP core for embedded machine-learning workloads. It handles processing, code execution and memory management within a self-contained architecture, rather than relying on a separate host controller. The core is fully programmable and can execute neural networks alongside signal-processing and control code. The architecture supports modern machine-learning operators, including transformer workloads, sparsity acceleration and fast quantisation.
Two configurations, NPN32 and NPN64, target different performance levels. The NPN64 version adds hardware sparsity acceleration that can double effective performance. The processor supports integer data types from 4-bit to 32-bit and is designed to execute neural networks efficiently within constrained embedded environments.
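The arithmetic behind the sparsity claim is straightforward: if the hardware skips multiply-accumulate operations for zero-valued weights, a network pruned to 50% sparsity needs only half the MACs. The sketch below illustrates that counting argument in NumPy; it is a toy model of zero-skipping in general, not a description of Ceva's actual microarchitecture, and the function names are invented for illustration.

```python
import numpy as np

def dense_macs(weights):
    # Without zero-skipping, every weight costs one multiply-accumulate.
    return weights.size

def sparse_macs(weights):
    # With hardware zero-skipping, only non-zero weights cost a MAC.
    return int(np.count_nonzero(weights))

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))

# Prune the 50% smallest-magnitude weights to zero (50% sparsity).
threshold = np.median(np.abs(w))
w[np.abs(w) < threshold] = 0.0

print(dense_macs(w))                    # 4096 MACs without skipping
print(sparse_macs(w))                   # 2048 MACs with skipping
print(dense_macs(w) / sparse_macs(w))   # 2.0x effective speed-up
```

At 50% sparsity the effective throughput exactly doubles, which is consistent with the "can double effective performance" figure quoted for the NPN64; real-world gains depend on how sparse a given pruned model actually is.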
To reduce memory requirements in edge devices, the architecture includes hardware weight decompression that can shrink the memory footprint of neural-network models by up to 80%. The processor also integrates power-management features such as dynamic voltage and frequency scaling to optimise energy consumption in always-on systems.
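Ceva does not disclose the compression format behind the "up to 80%" figure, but the footprint arithmetic can be illustrated with one common scheme: replacing each 32-bit float weight with a short index into a small shared codebook. The sketch below is a hypothetical example of that accounting only; the function name and the 16-centroid codebook are assumptions, not the NeuPro-Nano's actual format.

```python
import numpy as np

def clustered_footprint(weights, n_centroids=16):
    """Bytes needed if each float32 weight is replaced by a 4-bit
    index into a shared codebook of n_centroids float32 values.
    (Illustrative scheme only; the real compression format is
    not disclosed.)"""
    index_bits = weights.size * 4        # one 4-bit index per weight
    codebook_bits = n_centroids * 32     # float32 codebook entries
    return (index_bits + codebook_bits) // 8

rng = np.random.default_rng(1)
w = rng.normal(size=(128, 128)).astype(np.float32)

raw = w.nbytes                   # 65536 bytes of uncompressed float32
packed = clustered_footprint(w)  # 8256 bytes: packed indices + codebook
print(1 - packed / raw)          # ~0.87, i.e. ~87% smaller in this toy case
```

Under this toy scheme the footprint shrinks by roughly 87%, so a hardware decompressor working on a comparable representation could plausibly deliver the up-to-80% savings the vendor quotes, with the exact figure depending on model and format.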
Typical applications include voice interfaces, vision processing, audio analysis and sensor-based AI tasks in battery-powered devices across consumer, industrial and IoT markets.
Ceva presented the NeuPro-Nano during embedded world 2026 at Hall 4, Booth 4-462, where the company demonstrated how the processor can be integrated into next-generation embedded platforms requiring efficient edge AI capabilities.