The European AI Act is reshaping the regulation of artificial intelligence. By placing the safety and security of AI-based systems under scrutiny, use case by use case, it also sets the stage for future AI certification programs. Certification, however, presupposes thorough evaluation.
AI systems are increasingly deployed to handle sensitive data and perform critical tasks across varied environments. The breadth of the adversarial landscape makes comprehensive assessment both challenging and essential, particularly for machine learning models and deep neural networks (DNNs), whose attack surface is complex. These systems are, in essence, mathematical constructs implemented on physical platforms of software and hardware, and each layer exposes its own vulnerabilities.
Research conducted by CEA-Leti has shed light on often-overlooked physical vulnerabilities of common IoT devices, specifically deep neural network models running on 32-bit microcontrollers. Because a model's internal parameters are stored in device memory, they are prime targets for attacks that manipulate them, whether to reverse-engineer the model or to alter its behavior. One notable finding concerns Bit-Flip Attacks (BFAs): flipping just a few bits can substantially degrade the accuracy of a convolutional neural network, raising significant security and evaluation concerns.
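To see why so few flips suffice, consider the two's-complement encoding of the quantized 8-bit weights typical of microcontroller deployments: flipping the most significant bit of a single weight swings its value across almost the entire representable range. The sketch below is illustrative only, assuming int8 weights; the helper name is hypothetical.

```python
def flip_bit_int8(value: int, bit: int) -> int:
    """Flip one bit of an int8 weight stored as a two's-complement byte.

    `value` is the signed weight in [-128, 127]; `bit` is the bit index
    (0 = least significant, 7 = sign bit).
    """
    raw = (value & 0xFF) ^ (1 << bit)        # fault the raw byte in memory
    return raw - 256 if raw >= 128 else raw  # reinterpret as signed int8

# Flipping the sign bit of a small positive weight makes it strongly negative:
print(flip_bit_int8(3, 7))   # -125
print(flip_bit_int8(3, 0))   # 2: low-order flips barely move the value
```

A few such high-order faults on well-chosen weights are enough to distort the values a convolutional layer feeds forward, which is why BFAs degrade accuracy so sharply.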
Research in this area has traditionally concentrated on models stored in Dynamic Random-Access Memory (DRAM). CEA-Leti's work, by contrast, highlights the importance of bit-sets (faults that force a bit to 1) rather than bit-flips for mounting a BFA-like attack that extracts confidential information from a protected black-box model: by comparing the model's output decisions with and without faults, an attacker can uncover parameter values and exploit them for reverse-engineering. This research paves the way for robust evaluation protocols for AI models embedded on Cortex-M platforms equipped with Flash memory.
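As an illustration of the extraction principle, the following sketch compares a black-box model's decisions with and without a bit-set fault on one targeted parameter bit. The `predict` and `predict_with_bit_set` callbacks are hypothetical stand-ins for querying the clean device and the faulted device; they are not CEA-Leti's actual tooling.

```python
def infer_parameter_bit(probe_inputs, predict, predict_with_bit_set):
    """Recover one parameter bit of a black-box model from its decisions.

    A bit-set fault forces the targeted memory bit to 1. If any decision
    changes under the fault, the bit must originally have been 0; if all
    decisions are unchanged, it was most likely already 1 (only likely,
    since a changed parameter can still leave every probed decision intact).
    """
    clean = [predict(x) for x in probe_inputs]
    faulted = [predict_with_bit_set(x) for x in probe_inputs]
    return 0 if faulted != clean else 1
```

Repeating this test bit by bit reconstructs parameter values, which is what makes the recovered information usable for reverse-engineering.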
By combining theoretical analysis with laser fault injection experiments, CEA-Leti's research aims to deepen understanding of how model characteristics affect attack efficiency. The study also underscores the value of simulation in streamlining the evaluation of AI models. These advances in evaluating AI security are crucial not only for certification but also for designing effective protection mechanisms for embedded AI systems, ensuring their resilience against such threats.
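One way simulation can streamline such an evaluation is to sweep single bit-flips over a model's stored weights and rank them by the accuracy loss they cause, reserving costly physical experiments for the most damaging bits. The sketch below assumes int8 weights held in a NumPy array and a hypothetical `evaluate` callback returning validation accuracy; an exhaustive sweep like this is only tractable for small, MCU-scale models.

```python
import numpy as np

def rank_bit_flips(weights: np.ndarray, evaluate) -> list:
    """Simulate every single bit-flip in an int8 weight tensor and rank
    the faults by the accuracy drop they cause.

    `evaluate(weights) -> float` is a hypothetical callback that runs the
    model with the given weights on a validation set.
    """
    baseline = evaluate(weights)
    raw = weights.view(np.uint8)        # raw bytes, sharing memory with `weights`
    results = []
    for idx in np.ndindex(raw.shape):
        for bit in range(8):
            raw[idx] ^= 1 << bit        # inject the fault in place
            results.append((idx, bit, baseline - evaluate(weights)))
            raw[idx] ^= 1 << bit        # restore the original byte
    return sorted(results, key=lambda r: r[2], reverse=True)
```

Ranking simulated faults this way lets an evaluator focus laser fault injection campaigns, and any resulting protections, on the few bits that matter most.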