AI regulation in the EU
The European Union (EU) is at the forefront of AI regulation. By June 2024, a foundational legislative framework had been established, with additional mechanisms and acts under development both at the EU level and within individual member states. This progress reflects the rapid advancement of AI technologies over the past decade, particularly the rise of generative AI capable of producing content that is often indistinguishable from human output. While these technologies offer clear advantages, they also pose numerous risks to individuals. Consequently, the primary objectives of AI regulation are to safeguard inalienable individual rights and freedoms, enforce compliance with ethical and moral standards, and mandate risk assessments for AI solutions.
ALTAI risk assessment mechanism
Every AI solution undergoes an initial assessment using the ALTAI (Assessment List for Trustworthy Artificial Intelligence) document. ALTAI consists of a series of questions for AI solution developers to identify potential risks the solution may pose to humans or the environment.
The document was created by the AI HLEG (High-Level Expert Group on Artificial Intelligence), which has the authority to modify and update it. This group has also contributed to drafting regulatory acts concerning so-called ‘high-risk AI systems’, as discussed below.
Regulation of high-risk AI
Responding to the ALTAI questions provides a rough idea of an AI solution's risk category, specifically whether it poses a particular risk. AI systems are classified into four categories: "unacceptable risk", "high-risk", "limited risk" and "minimal risk".
AI systems deemed to pose unacceptable risks are entirely prohibited (Article 5, Artificial Intelligence Act, https://artificialintelligenceact.eu/). These include social scoring systems, AI used to manipulate human behavior, AI that exploits vulnerabilities of specific demographic groups, and others.
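The four-tier scheme can be sketched as a simple lookup. The tier names below come from the Act itself, but the example use cases and the `classify` function are illustrative assumptions only; the actual classification follows Articles 5–7 and Annex III, not a dictionary.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"  # prohibited outright (Article 5)
    HIGH = "high-risk"                  # subject to the HR AI requirements
    LIMITED = "limited risk"            # lighter transparency obligations
    MINIMAL = "minimal risk"            # largely unregulated

# Illustrative mapping of example use cases to tiers; not an official taxonomy.
EXAMPLE_TIERS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "medical diagnosis support": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_TIERS[use_case]

print(classify("social scoring").value)  # -> unacceptable risk
```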
High-risk (HR) AI systems are subject to special regulations outlined in specific sections of the AI Act. These include:
- A comprehensive description of the lifecycle of HR AI solutions
- Key definitions (Article 3), concepts and principles (Article 8) governing HR AI regulation
- Classification and typology of HR AI solutions (Articles 6, 7, and Annex III)
The AI Act further sets out numerous requirements for HR AI systems, including:
- A dedicated risk management system (Article 9: Risk Management System)
- Data requirements and management policies (Article 10: Data and Data Governance)
- Logging operations (Article 12: Record-keeping)
- Comprehensive documentation (Article 18)
- Transparency and human control (Articles 13, 14)
- A quality management system (Article 17: Quality Management System)
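To make one of these requirements concrete: the record-keeping obligation (Article 12) amounts to automatic logging of a system's operation over its lifetime. The sketch below is a minimal, hypothetical append-only logger with a hash chain for tamper evidence; it is an illustration of the idea, not a compliance recipe, and all names in it are invented.

```python
import hashlib
import json
import time

class EventLog:
    """Minimal append-only event log: each record carries the hash of the
    previous record, so tampering with earlier entries is detectable."""

    def __init__(self):
        self.records = []
        self._prev_hash = "0" * 64  # sentinel for the first record

    def record(self, event: str, detail: dict) -> dict:
        entry = {
            "ts": time.time(),
            "event": event,
            "detail": detail,
            "prev": self._prev_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._prev_hash = entry["hash"]
        self.records.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the hash chain; False if any record was altered."""
        prev = "0" * 64
        for e in self.records:
            if e["prev"] != prev:
                return False
            body = {k: e[k] for k in ("ts", "event", "detail", "prev")}
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = EventLog()
log.record("inference", {"model": "demo-v1", "input_id": 42})
log.record("human_override", {"operator": "op-7"})
print(log.verify())  # -> True
```

Verification fails as soon as any stored record is edited, which is the property such logging provisions are after.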
A separate clause mandates that all HR AI solutions, whether developed within the EU or imported, must undergo a conformity assessment (Article 43: Conformity Assessment, and Annexes V–VII) to ensure compliance with the Act (Article 26: Obligations of Importers). Successful assessments result in certification (Article 44).
Providers of HR AI systems must meet various obligations (Article 16). An EU-wide database of HR AI systems (Article 71) is being introduced, with a recommendation to continue monitoring these systems even after they are withdrawn from the market (Article 72: Post-Market Monitoring). Significant penalties for non-compliance are defined (Article 99: Penalties).
The concept of AI sandboxes (Article 57: AI Regulatory Sandboxes) is introduced to allow testing AI systems before full deployment, with the testing process itself also regulated (Article 58).
Regulation of general-purpose AI
General-purpose AI systems, such as the widely used ChatGPT, are categorized separately (Chapter V). If such a system is not part of a high-risk AI system (in which case it is regulated under the high-risk category), it must comply with the following requirements:
- Undergo a conformity assessment procedure
- Include comprehensive technical documentation
- Provide all necessary information to users deploying the system
In addition, certain provisions for HR AI systems also extend to general-purpose AI systems.
Regulation of low-risk AI
AI solutions not classified as high-risk are not subject to strict regulations. However, their use is guided by various standards developed by the ISO/IEC Artificial Intelligence Committee (https://www.iso.org/committee/6794475.html). These standards can be broadly divided into two categories:
- General standards, which focus on data management and AI models, ensuring their accuracy, robustness, and transparency
- Industry-specific standards, tailored to the unique needs of specific sectors deploying AI
It's worth noting that the EU standards overlap significantly with Russian national standards, facilitating mutual understanding.
Future steps to build trust in AI systems in the EU
Two important additions to AI regulation in the EU are outlined in the Coordinated Plan on Artificial Intelligence (https://digital-strategy.ec.europa.eu/en/policies/plan-ai). The following infrastructure is expected to be in place by 2026-2027:
- TEFs (Testing and Experimentation Facilities): These facilities will serve as controlled environments (AI sandboxes) for comprehensive testing and evaluation of AI solutions, particularly HR AI systems
- DIHs (Digital Innovation Hubs): These centers will promote the exchange and deployment of reliable and secure AI solutions, driving adoption of trusted AI across all economic sectors
These initiatives have the clear goal of ensuring the development and deployment of AI systems that prioritize safety and reliability.