
Three Layer AI Neural Network for On-Chip Microcontroller Training

June 09, 2025


Rohm has introduced what it claims is the first AI-enabled microcontroller with on-chip training capability. The ML63Q253 and Q255 devices are designed for fault and anomaly prediction and for degradation forecasting from sensing data, in applications such as motors in industrial and consumer equipment.

The devices combine a 32-bit Arm Cortex-M0+ core with a hardware AI accelerator, alongside a CAN FD controller, three-phase motor-control PWM and dual A/D converters. Power consumption is around 40mW, making them suitable for a wide range of applications.

Traditional edge AI models rely on network connectivity and high-performance CPUs, which can be costly and difficult to deploy. Endpoint AI instead trains in the cloud and runs inference on the local device, but it still needs network connectivity, and inference is typically performed in software on GPUs or high-performance CPUs.

Rohm has taken a different approach, implementing a simple three-layer neural network algorithm to run its proprietary Solist-AI model on the on-chip AxlCORE-ODL accelerator. This allows both learning and inference to be performed independently, with no need for cloud or network connectivity.
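Rohm has not published the internals of Solist-AI or the AxlCORE-ODL core, but the idea of a three-layer network that both trains and infers on the device itself can be sketched in plain Python. The layer sizes, sigmoid activation, learning rate and toy task below are all illustrative assumptions, not the actual model:

```python
import math
import random

random.seed(0)

# Illustrative three-layer network: input layer -> hidden layer -> output layer.
# Sizes and hyperparameters are assumptions for the sketch.
N_IN, N_HID, N_OUT = 4, 8, 1
LR = 0.1

w1 = [[random.uniform(-0.5, 0.5) for _ in range(N_IN)] for _ in range(N_HID)]
b1 = [0.0] * N_HID
w2 = [[random.uniform(-0.5, 0.5) for _ in range(N_HID)] for _ in range(N_OUT)]
b2 = [0.0] * N_OUT

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(w1, b1)]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h)) + b) for row, b in zip(w2, b2)]
    return h, y

def train_step(x, target):
    # One stochastic-gradient step: forward pass, then backprop through both layers.
    h, y = forward(x)
    dy = [(yi - t) * yi * (1.0 - yi) for yi, t in zip(y, target)]
    dh = [hj * (1.0 - hj) * sum(dy[k] * w2[k][j] for k in range(N_OUT))
          for j, hj in enumerate(h)]
    for k in range(N_OUT):
        for j in range(N_HID):
            w2[k][j] -= LR * dy[k] * h[j]
        b2[k] -= LR * dy[k]
    for j in range(N_HID):
        for i in range(N_IN):
            w1[j][i] -= LR * dh[j] * x[i]
        b1[j] -= LR * dh[j]

# Toy task: learn to flag samples whose first feature exceeds 0.5.
data = [[random.random() for _ in range(N_IN)] for _ in range(200)]
for _ in range(300):
    for x in data:
        train_step(x, [1.0 if x[0] > 0.5 else 0.0])

_, y_hi = forward([0.9, 0.2, 0.2, 0.2])
_, y_lo = forward([0.1, 0.2, 0.2, 0.2])
print(y_hi[0], y_lo[0])
```

On the real part this computation runs in the accelerator hardware rather than as floating-point software, which is what makes the 40mW power budget plausible.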

The MCUs support unsupervised learning, training on motor data to establish a baseline and then monitoring for deviations from it, as well as supervised learning to recognise specific anomalies, giving a complete solution for predictive maintenance and anomaly detection.
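The unsupervised baseline-and-deviation workflow can be illustrated with a simple running-statistics monitor. This is a generic sketch (Welford's online mean/variance with a z-score anomaly score), not Rohm's actual algorithm, and the feature values are invented:

```python
import math

class BaselineMonitor:
    """Learn a per-feature baseline from healthy sensor data, then score new
    samples by how far they deviate from it. Illustrative only."""

    def __init__(self, n_features):
        self.n = 0
        self.mean = [0.0] * n_features
        self.m2 = [0.0] * n_features   # running sum of squared deviations (Welford)

    def learn(self, sample):
        self.n += 1
        for i, x in enumerate(sample):
            d = x - self.mean[i]
            self.mean[i] += d / self.n
            self.m2[i] += d * (x - self.mean[i])

    def score(self, sample):
        # Numerical anomaly score: the largest per-feature z-score.
        std = [math.sqrt(m / max(self.n - 1, 1)) or 1e-9 for m in self.m2]
        return max(abs(x - m) / s for x, m, s in zip(sample, self.mean, std))

mon = BaselineMonitor(2)
for t in range(100):                       # learning phase: healthy motor data
    mon.learn([1.0 + 0.01 * (t % 3), 0.5])
print(mon.score([1.01, 0.5]))              # near the baseline: low score
print(mon.score([3.0, 0.5]))               # far from the baseline: high score
```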


This combination lets the MCUs carry out both learning and inference autonomously through on-device training, avoiding the complexity and cost of cloud-based training. On-chip learning also allows the model to adapt to different installation environments and to unit-to-unit variation, even within the same equipment model.

Rohm has developed an AI simulation tool called Solist-AI Sim, allowing users to assess the efficacy of learning and inference before deploying the AI microcontroller. The data generated by this tool can also serve as the basis for supervised training of the AI MCU, enhancing validation and improving inference accuracy.
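The internals of Solist-AI Sim are not public, but the pre-deployment workflow it supports (generate labelled data, train, then check inference accuracy before committing to hardware) can be illustrated with a hypothetical stand-in. The sensor model and the simple threshold classifier below are assumptions for the sketch:

```python
import random

random.seed(1)

# Hypothetical pre-deployment check: generate labelled "normal" and
# "anomalous" vibration-like readings, fit a classifier on a training split,
# and report accuracy on a held-out split before deploying.
def make_sample(anomalous):
    base = random.gauss(1.0, 0.05)         # healthy reading around 1.0
    if anomalous:
        base += random.uniform(0.3, 0.6)   # degradation shifts the reading up
    return base

train = [(make_sample(a), a) for a in [False] * 200 + [True] * 200]
test = [(make_sample(a), a) for a in [False] * 100 + [True] * 100]

# Fit a threshold halfway between the two class means on the training split.
mean_ok = sum(x for x, a in train if not a) / 200
mean_bad = sum(x for x, a in train if a) / 200
threshold = (mean_ok + mean_bad) / 2

accuracy = sum((x > threshold) == a for x, a in test) / len(test)
print(f"threshold={threshold:.2f} accuracy={accuracy:.2%}")
```

A validation loop of this shape also yields the labelled samples that the article says can seed supervised training on the MCU itself.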

The AxlCORE-ODL hardware accelerator processes AI workloads around 1,000 times faster than the equivalent software running on a 12MHz MCU. This allows anomalies that deviate from the established baseline to be detected in real time and output as a numerical score.

Rohm plans to release 16 devices with varying memory capacities, package types, pin counts and packing options. Mass production of eight models in the TQFP package began sequentially in February 2025, including two models with 256KB of code flash memory in tape-and-reel packing; evaluation boards are available for these devices.
