U.S. and Australian cybersecurity agencies, together with international partners, have released new principles for the secure integration of AI into operational technology (OT), aimed at helping operators adopt artificial intelligence safely in critical-infrastructure settings. The guidance centers on balancing innovation with risk management as AI becomes more deeply embedded in OT systems. It is particularly relevant for eeNews Europe readers working on industrial, embedded, or automation systems, as AI-driven control and monitoring continue to expand across OT networks.
The Cybersecurity and Infrastructure Security Agency (CISA) and the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) have jointly released the Principles for the Secure Integration of Artificial Intelligence (AI) in Operational Technology (OT), with backing from cybersecurity entities across Europe, North America, and the Asia-Pacific region. The document outlines four core principles designed to help OT owners and operators understand the risks and adopt AI in a controlled, resilient manner.
“AI has the potential to significantly improve the performance and resilience of operational technology environments, but this potential must be met with caution,” stated CISA Acting Director Madhu Gottumukkala. “Operational technology systems form the foundation of our nation’s critical infrastructure, and the integration of AI into these environments necessitates a thoughtful, risk-aware approach. This guidance provides organizations with practical principles to ensure that AI adoption enhances — rather than compromises — the safety, security, and reliability of essential services.”
The guidance focuses on deployments of machine learning and large language models, including LLM-based AI agents, and notes that the principles also apply to systems built on traditional statistical models or rule-based automation.
CISA and ASD’s ACSC recommend that organizations begin by familiarizing themselves with AI, training technical teams and operators on AI risks, impacts, and secure development practices. They also advise operators to evaluate AI use cases in OT environments, weighing technical feasibility against data security requirements and preparing for integration challenges over both the short and long term.
The third principle emphasizes robust AI governance, including ongoing model testing and rigorous compliance monitoring. The final principle calls for building safety and security into every AI project, promoting transparency, operator oversight, and close alignment with existing incident-response plans.
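To make that final principle concrete, the sketch below shows, in Python, one way an OT integrator might gate an AI-generated setpoint recommendation behind independently defined engineering limits and explicit operator approval, with every decision written to an audit trail. This is purely illustrative and is not taken from the guidance itself; all names here (Recommendation, SAFE_RANGE, apply_setpoint, log_event) are hypothetical placeholders.

```python
# Illustrative only: a human-in-the-loop guardrail for an AI-suggested
# setpoint change in an OT environment. The CISA/ACSC guidance describes
# principles, not an API; every identifier below is a placeholder.

from dataclasses import dataclass

# Engineering limits defined independently of the AI model, so a faulty
# or compromised model cannot push the process outside its safe envelope.
SAFE_RANGE = (40.0, 85.0)  # e.g. allowable temperature setpoints, in deg C

@dataclass
class Recommendation:
    setpoint: float
    model_version: str
    rationale: str  # transparency: why the model suggests this change

def apply_setpoint(value: float) -> None:
    """Placeholder for the actual write to the control system."""
    print(f"setpoint applied: {value}")

def log_event(message: str) -> None:
    """Placeholder for audit logging tied to incident-response plans."""
    print(f"[audit] {message}")

def review_and_apply(rec: Recommendation, operator_approved: bool) -> bool:
    """Gate an AI recommendation behind hard limits and operator sign-off."""
    lo, hi = SAFE_RANGE
    if not (lo <= rec.setpoint <= hi):
        log_event(f"rejected out-of-range setpoint {rec.setpoint} "
                  f"from model {rec.model_version}")
        return False
    if not operator_approved:
        log_event(f"setpoint {rec.setpoint} held pending operator approval")
        return False
    log_event(f"operator approved setpoint {rec.setpoint} "
              f"(model {rec.model_version}: {rec.rationale})")
    apply_setpoint(rec.setpoint)
    return True

if __name__ == "__main__":
    rec = Recommendation(setpoint=72.5, model_version="demo-0.1",
                         rationale="reduce energy use during low demand")
    review_and_apply(rec, operator_approved=True)
```

The design point this sketch tries to capture is that the safe operating envelope and the approval step sit outside the model, so operator oversight and hard engineering limits remain authoritative regardless of what the AI recommends.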
Cybersecurity agencies from North America, Europe, and the Asia-Pacific region, including the NSA’s AI Security Center, the FBI, Canada’s Cyber Centre, Germany’s BSI, the Netherlands’ NCSC-NL, New Zealand’s NCSC-NZ, and the UK’s NCSC, collaborated on the guide to enhance AI security in critical-infrastructure systems. Readers can access the complete guidance and related resources on CISA’s Artificial Intelligence and Industrial Control Systems webpages.