June 16, 2024
5 min read


AI for critical infrastructure

The use of AI in critical infrastructure (CI), including energy and life support facilities, security and control systems, transport and logistics, and many others, is essential for driving digital transformation, enhancing competitiveness, and ensuring the sustainable development of these industries. However, AI systems used in CI are subject to stringent security requirements to mitigate the many negative consequences of incorrect or uncontrolled behavior. These standards and requirements are reflected in the AI regulations of many countries where key rules and recommendations are being developed [1-2], and several regulatory documents have already been adopted [3].

AI typically has two main roles in CI facilities:

  • To build functional safety systems for CI facilities
  • To develop (semi)automatic control of CI facilities

In the first case, AI systems can directly ensure the safety of certain processes and/or objects within CI enterprises, provide information for safety functions, and serve as tools for designing and developing safety systems. In the second case, AI systems are used for full or partial control and management of processes and/or objects within CI, typically operating in a feedback loop that involves evaluating the status of the CI object, generating control actions, applying control to the monitored object, and assessing its new state.

There's no doubt that in both cases, incorrect operation of the AI system, whatever the cause, can have catastrophic consequences for the CI object. Therefore, ensuring the security of these AI solutions is crucial. The main risks to which AI systems in CI are exposed, aligned with the key stages of their lifecycle, are as follows.

Security requirements for AI systems in critical infrastructure

Secure data

The first stage of creating an AI system is typically preparing data for its training. The data must meet several requirements, including [4-6]:

  • Accuracy
  • Completeness
  • Security
  • Lack of bias

Any deviation from these requirements for training and validation data increases the risk of creating AI models with low accuracy, poor reliability, and weak generalization properties [5].
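In practice, such requirements translate into automated checks run before training. The sketch below illustrates two of them, completeness and lack of bias, on toy tabular records; the field names, thresholds, and record format are illustrative assumptions, not part of any particular methodology.

```python
# Minimal sketch of pre-training data checks; field names and the
# balance threshold are illustrative assumptions.

REQUIRED_FIELDS = {"sensor_id", "value", "label"}

def check_completeness(records):
    """Return indices of records with missing fields or None values."""
    return [i for i, r in enumerate(records)
            if not REQUIRED_FIELDS <= r.keys()
            or any(r.get(f) is None for f in REQUIRED_FIELDS)]

def check_label_balance(records, max_ratio=4.0):
    """Crude bias check: no class may outnumber another by more than max_ratio."""
    counts = {}
    for r in records:
        counts[r["label"]] = counts.get(r["label"], 0) + 1
    return max(counts.values()) / min(counts.values()) <= max_ratio

records = [
    {"sensor_id": "t1", "value": 21.5, "label": "normal"},
    {"sensor_id": "t2", "value": None, "label": "normal"},   # incomplete
    {"sensor_id": "t3", "value": 98.0, "label": "anomaly"},
]
print(check_completeness(records))   # → [1]
print(check_label_balance(records))  # → True
```

Records flagged by such checks would be repaired or excluded before the training set is frozen; accuracy and security checks typically require domain-specific validation and data-provenance controls rather than a generic test.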

To create and train a secure AI model, the following measures are essential:

  • Develop it in protected software environments
  • Use only trusted software
  • Avoid using ready-made solutions from untrusted third-party sources

Failure to follow these practices can lead to models containing undeclared components that enable data theft, changes in model behavior triggered by external factors, and many other vulnerabilities. During training, the model must also be protected from input data attacks [1], so-called 'adversarial attacks'. Another requirement for AI systems in CI is a degree of transparency in the model's logic, so that it is interpretable by humans.
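To make the adversarial-attack risk concrete, the sketch below applies an FGSM-style perturbation (a common technique from the adversarial ML literature, not specific to this article) against a toy logistic model: the input is shifted in the sign direction of the loss gradient, which is enough to noticeably lower the model's score. The weights and step size are illustrative assumptions.

```python
import numpy as np

# Sketch of an FGSM-style adversarial perturbation against a toy
# logistic model; weights and epsilon are illustrative assumptions.

w = np.array([2.0, -1.0])   # "trained" weights
b = 0.0

def predict(x):
    """Probability of the positive class under the logistic model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y_true, eps=0.5):
    # Gradient of the logistic loss w.r.t. the input is (p - y) * w;
    # stepping in its sign direction maximizes the loss.
    grad = (predict(x) - y_true) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0])
x_adv = fgsm(x, y_true=1.0)
print(predict(x), predict(x_adv))  # the adversarial copy scores lower
```

Adversarial training counters this by generating such perturbed inputs during training and including them, correctly labeled, in the training batches.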

Safe operation of AI models

After creating the model, it must undergo rigorous and comprehensive testing following a specific methodology [5]. For models with vulnerabilities that pose the greatest threats, external testing by AI security experts may also be necessary. Furthermore, for AI-based control systems, a procedure for transferring control to an external agent (either a human or a non-AI algorithm) must be designed and implemented.
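One common way to implement the control-transfer procedure mentioned above is a confidence-gated handover: when the AI controller's confidence falls below a threshold, a deterministic non-AI fallback takes over. The sketch below assumes this design; the controller, the fallback rule, and the threshold are all hypothetical placeholders.

```python
# Sketch of a confidence-gated handover from an AI controller to a
# non-AI fallback; all names and thresholds are illustrative assumptions.

FALLBACK_THRESHOLD = 0.8

def ai_controller(state):
    """Placeholder model: returns (action, confidence)."""
    if 0.0 <= state <= 1.0:
        return "hold", 0.95
    return "hold", 0.40   # out-of-distribution input: low confidence

def rule_based_fallback(state):
    """Deterministic safety rule that needs no learned model."""
    return "shutdown" if state > 1.0 else "hold"

def control_step(state):
    action, confidence = ai_controller(state)
    if confidence < FALLBACK_THRESHOLD:
        return rule_based_fallback(state), "fallback"
    return action, "ai"

print(control_step(0.5))  # → ('hold', 'ai')
print(control_step(5.0))  # → ('shutdown', 'fallback')
```

In a real CI deployment the handover would also log the event and alert an operator, so that a human can take over from the algorithmic fallback if needed.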

The above highlights only the primary risks and requirements for secure AI systems in critical infrastructure. It's important to note that even relatively lenient AI regulations in certain countries still place such systems in a distinct category (so-called high-risk AI systems [7]) and require meticulous management of the associated risks.

Protection of AI systems for critical infrastructure

Mitigating risks at the data preparation stage involves adhering to data handling methodologies [3-5] or consulting with AI security experts. For the model creation stage, all models must undergo a checking procedure for undeclared functionality. The model must be developed and tested in a secure development environment, and the code of the trained model encrypted. The training stage includes adversarial training, and if the most likely threat vectors for the input data are identified, additional protection measures can be integrated into the model. At the inference stage, when the model is run on specific hardware, threat analysis must be conducted for that equipment, taking into account the manufacturer's recommendations to mitigate threats.
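One basic building block of the model-stage protections above is verifying that the trained artifact leaving the secure development environment is the same one loaded at inference time. The sketch below shows a SHA-256 integrity check over the serialized model bytes; the artifact contents and the loading flow are illustrative assumptions, and a real pipeline would combine this with signing and encryption of the artifact.

```python
import hashlib

# Sketch of verifying a trained-model artifact against a digest recorded
# in the secure build environment; contents are illustrative assumptions.

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Recorded at the end of the trusted build.
artifact = b"\x00serialized-model-bytes"
trusted_digest = sha256_of(artifact)

def load_model(blob: bytes, expected: str) -> bytes:
    """Refuse to deserialize a model whose bytes do not match the digest."""
    if sha256_of(blob) != expected:
        raise ValueError("model artifact failed integrity check")
    return blob  # safe to deserialize only after the check

model = load_model(artifact, trusted_digest)   # passes
tampered = artifact + b"\xff"                  # e.g. an injected payload
try:
    load_model(tampered, trusted_digest)
except ValueError as e:
    print(e)   # the tampered artifact is rejected
```

A digest check of this kind catches post-build tampering with the artifact; detecting undeclared functionality trained into the model itself requires the behavioral testing procedures described earlier.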

If you have any questions, please contact us: aist@kaspersky.com

References
