• Neeve
  • Posts
  • 🤖 AI Integration Rules for OT Systems

🤖 AI Integration Rules for OT Systems

Global cyber agencies publish joint guidance outlining four key principles for secure AI integration with operational technology in critical infrastructure.

Welcome to your essential briefing on threats at the intersection of cybersecurity and critical infrastructure, brought to you by Neeve, the edge cloud security platform for smart buildings, making built spaces secure, intelligent, and sustainable.

This Week’s Cyber Insights

🛡️ Global Cyber Agencies Issue AI Security Guidance for Critical Infrastructure OT

Cyber agencies from the United States, United Kingdom, Canada, Germany, the Netherlands, and New Zealand have published joint guidance outlining four key principles for the safe and secure integration of artificial intelligence with operational technology environments in critical infrastructure.

  • The 25-page document titled "Principles for the Secure Integration of Artificial Intelligence in Operational Technology" addresses how AI can benefit ICS environments—from training models on sensor data to identify deviations, to anomaly detection in PLCs and RTUs, and predicting equipment maintenance requirements.

  • The first principle focuses on understanding AI's unique risks including cybersecurity threats that lead to system compromise, disruptions, and functional safety impacts, as well as issues from low-quality training data and model drift causing inaccurate alerts and reduced system availability.

  • Employee overreliance on AI automation can lead to skill erosion and skill gaps—workers no longer able to manage systems during AI failures or incorrectly managing situations due to misinterpretation of AI outputs, requiring critical education on AI limitations.

  • The second and third principles address determining appropriate AI business use cases compared to other solutions, establishing governance mechanisms, integrating AI into existing security frameworks, and conducting thorough testing and regulatory compliance evaluations.

  • The fourth principle covers oversight and failsafe practices including continuous monitoring, validation, and refining of AI models while embedding safety systems and failsafe mechanisms throughout the operational technology environment.

🤔 The Bigger Picture: 

Building automation systems increasingly integrate AI for HVAC optimization, predictive maintenance, and anomaly detection—but without proper governance and failsafe mechanisms, these systems introduce cybersecurity risks and functional safety impacts. Facility operators should clearly define roles and responsibilities throughout AI system lifecycles, educate staff on AI limitations to prevent skill erosion, establish monitoring mechanisms to detect model drift, and ensure workers can manually manage systems during AI failures before deploying autonomous decision-making in critical infrastructure.

🇨🇳 China-Backed BRICKSTORM Malware Targets IT and Government Networks

CISA warns that Chinese state-sponsored actors are deploying the highly evasive BRICKSTORM malware to infiltrate IT organizations and government services, enabling long-term access, data theft, and potential sabotage through advanced backdoor capabilities.

  • BRICKSTORM functions as an advanced backdoor for VMware vSphere and Windows environments designed to maintain stealthy access while facilitating command and control operations.

  • The malware employs complex evasion techniques including multiple encryption layers, DNS-over-HTTPS (DoH) to conceal communications, SOCKS proxy for lateral movement, and self-monitoring capabilities that automatically reinstall if disrupted.

  • Attackers compromised web servers within DMZs, moved laterally to VMware vCenter servers, and deployed BRICKSTORM to harvest credentials from system backups or Active Directory databases, then steal VM snapshots and create hidden "rogue" VMs to evade detection.

  • CISA recommends network defenders actively hunt for intrusion signs using specific YARA and Sigma rules, block unauthorized DoH traffic, maintain strict inventory of network edge devices, and enforce robust network segmentation.

🤔 The Bigger Picture: 

Building automation systems and facility management platforms increasingly rely on VMware infrastructure for virtualized operations. BRICKSTORM's ability to create hidden VMs and harvest credentials from vCenter servers means attackers could gain persistent access to HVAC controls, access management systems, and energy monitoring platforms. Facility operators should immediately inventory all VMware deployments, block DNS-over-HTTPS traffic that bypasses network monitoring, segment building automation networks from IT infrastructure, and hunt for unauthorized virtual machines that could provide attackers with long-term access to critical building systems.

⚡ PromptPwnd Flaw Exposes AI Build Systems to Data Theft

Aikido Security has discovered a critical vulnerability called 'PromptPwnd' that affects AI-driven build systems integrated with GitHub and GitLab, enabling attackers to steal sensitive data from major companies.

  • Prompt injection flaw targets AI-driven build systems on GitHub and GitLab platforms

  • Vulnerability allows attackers to exploit systems for sensitive data theft

  • Major companies utilizing these platforms face significant breach risks

  • Flaw highlights security gaps in AI development pipelines

  • Immediate security measures needed to prevent severe data breaches

🤔 The Bigger Picture: 

As building management systems increasingly integrate with AI-driven development platforms, this vulnerability shows how supply chain attacks can target the tools we use to build and deploy smart building technologies. Organizations should review their CI/CD pipeline security immediately.

Further Alerts & Insights

🏭 Threat Landscape Grows Increasingly Dangerous for Manufacturers

Manufacturers remained the top target for cyberattacks in 2025 with half falling prey to ransomware and paying average $1 million ransoms plus $1.3 million recovery costs. Exploited vulnerabilities became the most common root cause for the first time in three years, while top-cited breach reasons included lack of security expertise, unknown cybersecurity gaps, and failure to adopt necessary protections. Jaguar Land Rover's September ransomware attack cost $1.7-2.4 billion as production shuttered for nearly a month.

⚡ US Energy Crisis Threatens AI Competition with China

A critical energy shortfall threatens America's AI development capabilities against China's 10,000+ TWh annual production. With US power grids at capacity and proposed nuclear plans adding only 100 TWh, the sustainability of AI growth raises national security concerns.

🤖 Teen Uses ChatGPT to Hack App, Steal 7.2 Million Records

A 17-year-old hacker was arrested for using ChatGPT-generated code to exploit the Kaikatsu CLUB app, compromising over 7.2 million users' data. This case highlights the growing risks of AI-assisted hacking techniques and mobile application vulnerabilities.