June 16, 2024
4 min read

AI regulation in Russia

AI regulation in Russia at the national level is currently under development.

The cornerstone of this effort is the “National Strategy for the Development of Artificial Intelligence until 2030” document, approved by Presidential Decree of the Russian Federation No. 490 dated October 10, 2019 (http://www.kremlin.ru/acts/bank/44731). The strategy outlines "the goals and key tasks for AI development in the Russian Federation, and its applications in ensuring national interests and achieving strategic national priorities, including in the field of scientific and technological development".

Point 19 of the strategy addresses issues around AI safety. Here are some of the key principles for "the development and use of AI technologies, compliance with which is mandatory in the implementation of this strategy":

  • Protection of human rights and freedoms
  • Safety of AI systems
  • Transparency of AI systems
  • Security of AI systems

The implementation of these principles currently involves a range of practical steps to ensure trust and safety in AI systems.

The first such step could be the compliance of AI solutions with the AI Ethics Code (https://ethics.a-ai.ru/), which establishes foundational principles for the ethical development, implementation, and use of AI in Russia.

While this code is not mandatory for all developers and users of AI systems, it serves as a set of recommendations aimed at promoting the creation of AI systems that are safe for humans and society. A special commission oversees the implementation of the ethics code. Its responsibilities include assessing the risks and societal impacts of AI systems, evaluating the code’s effectiveness, and compiling best practices for addressing ethical issues throughout the AI lifecycle, such as the ethical use of recommendation services.

The second step could be to test AI systems for compliance with technical standards and regulations.

The Technical Committee on Standardization No. 164 "Artificial Intelligence" (TC 164), comprising 68 specialized organizations, is developing national and international AI standards.

In Russia, more than 100 GOST standards for AI have already been adopted in the following areas:
  • Healthcare
  • Education
  • IT
  • Transport
  • Agriculture

Standards are also in effect for specific applied systems, including situational video analytics, remote sensing data processing, and security inspection tools at airports.

Standards addressing AI safety:
  • Functional safety (PNST 836-2023 (ISO/IEC TR 5469))
  • Ensuring trust in AI (GOST 59276)
  • Evaluating AI system quality (GOST 59898)
  • Assessing neural network robustness (GOST 70462)
  • AI risk management (PNST 776-2022)
  • Big data standards for AI (GOST 70889, 70466, and 59926)
  • Bias in AI systems (PNST 839-2023)
  • Ethical and societal aspects of AI (PNST 840-2023)

PNST 836-2023 (ISO/IEC TR 5469), "Artificial Intelligence. Functional Safety and AI Systems," applies to critical infrastructure systems and adds requirements based on the risk classification it introduces.
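To give a feel for what the kind of assessment described by standards such as GOST 70462 (neural network robustness) involves, here is a toy sketch: perturb each input with bounded noise and measure how often the model's output stays unchanged. This is purely illustrative; the classifier, the noise model, and the metric are assumptions, not taken from any standard.

```python
import random

def classify(x):
    # Toy stand-in for a trained model: label by the sign of the feature sum.
    return 1 if sum(x) >= 0 else 0

def robustness_score(inputs, epsilon=0.1, trials=100, seed=0):
    """Fraction of perturbed inputs whose label matches the clean label.

    epsilon: maximum per-feature perturbation magnitude.
    trials:  number of random perturbations per input.
    """
    rng = random.Random(seed)
    stable = 0
    total = 0
    for x in inputs:
        clean = classify(x)
        for _ in range(trials):
            noisy = [xi + rng.uniform(-epsilon, epsilon) for xi in x]
            stable += int(classify(noisy) == clean)
            total += 1
    return stable / total

samples = [[1.0, 2.0], [-3.0, 0.5], [0.05, -0.02]]
print(robustness_score(samples, epsilon=0.1))
```

A real conformity assessment would of course use the actual model, a threat model appropriate to the deployment domain, and acceptance thresholds defined by the standard; the score here is only the simplest stability metric of this family.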

Work is also underway to define the concept of an AI model lifecycle, formalizing the stages and verification methods for AI models.

Russia’s national regulatory framework for AI is constantly evolving, incorporating experience from experimental legal regimes for autonomous vehicles and other automated systems.

One possible scenario for ensuring trust in AI systems involves a dedicated security center.

Security center

On May 22, 2024, with support from the Ministry of Digital Development, Communications, and Mass Media of the Russian Federation, a consortium was established to research AI technology security (https://digital.gov.ru/ru/events/51054/). Members include the National Technology Center for Digital Cryptography (NTC DC), the Academy of Cryptography of the Russian Federation, and the V.P. Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS).

Conclusions

Russia stands among the global leaders in AI regulation. The goals of AI regulation have already been formulated, a foundational regulatory framework is in place, and supervisory structures and means for monitoring and verifying AI systems are being developed. Unlike the European Union's legislation (https://artificialintelligenceact.eu/), which categorizes AI systems into strict risk groups, Russia's approach focuses only on those areas of application where stringent regulation is deemed critically necessary.

We continue to monitor developments and updates to national AI regulations and will provide timely analytical materials when relevant. For more detailed information on AI regulation in Russia, visit https://ai.gov.ru/ai/regulatory/