Protecting Artificial Intelligence

Ensuring the secure development and application of artificial intelligence (AI) technologies in modern information and industrial environments.

Goals and areas of work

Kaspersky AIST

Secure AI for users and developers

Protecting AI systems involves implementing additional information security measures that account for the specific features and vulnerabilities of AI components: data, models, and computing platforms.

Areas of work

Our partner is one of Russia's leading research organizations in AI security: the Trusted Artificial Intelligence Research Center based at the V.P. Ivannikov Institute for System Programming of the Russian Academy of Sciences (ISP RAS).

AI threat landscape

AI in industry: applications and risks

Regulation

AI security services

Kaspersky AIST

We provide reliable protection for AI systems, safeguarding data, business processes, and AI infrastructure against potential threats.

Threat landscape analysis

Analysis of potential attack vectors against the customer's AI solution, based on the specific risks of its application domain. We provide a detailed report with actionable recommendations for addressing threats and improving the overall security of the solution.

Analysis of input and training data for AI models

Assessment of the AI model's vulnerability to adversarial and other input-data attacks, and verification that the training dataset complies with security requirements*. A comprehensive report outlines the vulnerabilities found and provides recommendations for mitigating risks.

* In some cases – such as large language models (LLMs) – additional research projects may be conducted.
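To make "adversarial input-data attacks" concrete, below is a minimal sketch of one classic technique such an assessment probes for: the fast gradient sign method (FGSM), which nudges an input just enough to flip a model's prediction. The model, tensors, and epsilon value here are illustrative assumptions, not part of the service deliverable.

```python
# Minimal FGSM sketch: perturb an input so a model misclassifies it.
# Model, input, label, and epsilon are stand-ins for illustration only.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of x."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximizes the loss, clamped to a valid range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```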

AI model analysis

Detection of malicious functionality in AI models, including trojans and other harmful modules embedded in deep neural networks*. Our analysis includes a detailed report on the presence or absence of malware, as well as malware localization and removal – all without compromising the model's performance.

* Requires access to the model's source code.
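As one narrow illustration of what embedded malicious functionality can look like: models serialized with Python's pickle format can carry payloads that execute when the file is loaded. The sketch below flags the opcodes that make this possible. The file path is hypothetical, and this is a simplified check, not the analysis service itself; real-world checkpoints (e.g., PyTorch .pt archives) must first be unpacked to their inner pickle.

```python
# Sketch: flag pickle opcodes in a serialized model file that can trigger
# code execution on load (a common carrier for model-file trojans).
# "model.pkl" is a hypothetical path used for illustration.
import pickletools

SUSPICIOUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}

def scan_pickle(path: str) -> list[str]:
    """Return a list of positions and opcodes that warrant manual review."""
    findings = []
    with open(path, "rb") as f:
        for opcode, arg, pos in pickletools.genops(f):
            if opcode.name in SUSPICIOUS:
                findings.append(f"{pos}: {opcode.name} {arg!r}")
    return findings

if __name__ == "__main__":
    for finding in scan_pickle("model.pkl"):
        print(finding)
```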

Monitoring the security of AI solutions

Evaluation of the security of AI solutions and models during operation to identify undeclared functionality or malicious activity. Where direct analysis is not feasible, detection relies on indirect indicators. We provide a report detailing the tests conducted and the presence or absence of undeclared functionality.
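As a simplified illustration of detection via indirect indicators, the sketch below flags inference outputs whose prediction-entropy statistics drift from a trusted baseline. The baseline data, threshold, and class of indicator are assumptions for illustration, not a description of the monitoring service.

```python
# Sketch: flag model outputs whose confidence distribution drifts from a
# trusted baseline - one example of an "indirect indicator" of anomalous
# behavior. Baseline probabilities and z-threshold are assumptions.
import numpy as np

def entropy(probs: np.ndarray) -> np.ndarray:
    """Shannon entropy of each row of class probabilities."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

class DriftMonitor:
    def __init__(self, baseline_probs: np.ndarray, z_threshold: float = 3.0):
        base = entropy(baseline_probs)
        self.mean, self.std = base.mean(), base.std() + 1e-12
        self.z_threshold = z_threshold

    def check(self, probs: np.ndarray) -> np.ndarray:
        """Return a boolean mask of outputs that look anomalous."""
        z = np.abs(entropy(probs) - self.mean) / self.std
        return z > self.z_threshold
```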

Developing secure AI models

Design and training of AI models with enhanced resilience against attacks, using adversarial training, trusted AI models, specialized (neuromorphic) hardware, and other proactive protection methods. Deployment and distribution of trusted development environments for AI models.
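As an illustration of one such method, the sketch below shows a single adversarial-training step: the model is trained on inputs perturbed to maximize its loss, so it learns to resist such perturbations (compare the FGSM sketch above). The model, optimizer, data, and epsilon are illustrative assumptions.

```python
# Sketch of one adversarial-training step: craft worst-case inputs with a
# single FGSM step, then run a standard update on the perturbed batch.
# All arguments are hypothetical stand-ins for illustration.
import torch
import torch.nn.functional as F

def adversarial_train_step(model, optimizer, x, y, epsilon=0.03):
    # Perturb the batch to maximize the model's loss.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Standard training step, but on the perturbed inputs; zero_grad also
    # clears the gradients accumulated while crafting the attack.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```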

Expert analytical support in AI regulation

Development of AI implementation strategies that take into account legal and regulatory requirements. Services include auditing AI solutions for compliance and contributing to the development of industry standards.

Consulting services and research on AI security

For organizations new to AI security, our team offers initial consultations tailored to your specific needs. We also conduct in-depth security analyses of specific AI solutions or approaches of interest.