Transport
The application of artificial intelligence (AI) in transport covers many areas [1, 2, 3], contributing to increased efficiency, safety, and convenience. These areas include:
Autonomous driving and navigation
Traffic flow optimization
Predictive maintenance
Decision-making support
Transport safety
Passenger service personalization
Given the critical impact of transport system failures on the stability of the entire national economy, the key requirements for AI systems in transportation are trust, safety, and reliability.
Standards that guide the application of AI technologies in transportation, both those already adopted and those still in development, require that the construction and implementation of AI systems comply with "all necessary safety standards and regulatory requirements to minimize the risk of accidents and protect human life, the environment, and economic interests" [1].
AI threats in transport
We will structure our analysis around the AI system lifecycle.
The first stage involves collecting and preparing data for AI model training. The key data in transportation is vehicle movement data (including real-time data), cargo and freight transport data, environmental data, and infrastructure data.
The confidentiality, security, and quality of data are critical for building reliable AI systems, which is why regulatory documents place special emphasis on data supply chains [4, 5, 6]. Thorough data verification is essential and must be carried out with the involvement of industry experts [1], and in some cases, AI specialists.
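As an illustration of such verification, below is a minimal data-plausibility check for a vehicle telemetry record. The field names and value ranges are hypothetical and would have to be defined together with industry experts; the sketch only shows the general pattern of rejecting or quarantining implausible records before they reach the training set.

```python
from dataclasses import dataclass

# Hypothetical plausibility bounds; real limits must come from industry experts.
SPEED_LIMIT_KMH = 250.0
LAT_RANGE = (-90.0, 90.0)
LON_RANGE = (-180.0, 180.0)

@dataclass
class TelemetryRecord:
    vehicle_id: str
    timestamp: float      # Unix time, seconds
    lat: float
    lon: float
    speed_kmh: float

def validate(record: TelemetryRecord) -> list[str]:
    """Return a list of human-readable issues; an empty list means the record passed."""
    issues = []
    if not record.vehicle_id:
        issues.append("missing vehicle_id")
    if not (LAT_RANGE[0] <= record.lat <= LAT_RANGE[1]):
        issues.append(f"latitude out of range: {record.lat}")
    if not (LON_RANGE[0] <= record.lon <= LON_RANGE[1]):
        issues.append(f"longitude out of range: {record.lon}")
    if not (0.0 <= record.speed_kmh <= SPEED_LIMIT_KMH):
        issues.append(f"implausible speed: {record.speed_kmh} km/h")
    return issues

if __name__ == "__main__":
    # Records that fail validation are quarantined for expert review,
    # not silently dropped or passed into training.
    bad = TelemetryRecord("TRK-042", 1_700_000_000.0, 95.2, 37.6, 310.0)
    print(validate(bad))
```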
The next stage is AI model creation and training. These processes must be carried out in secure software environments using trusted software. As with other critical industries, AI models developed for transportation must be free of undeclared or malicious functions—for example, those that could lead to the theft of telemetry data or changes in model behavior triggered by specific input data. Thorough analysis of a model's internal logic and its components minimizes the risk of hidden malicious payloads.
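One basic supporting control in this direction is integrity verification of model artifacts before they are loaded into the inference environment, so that only reviewed and approved files are deployed. The sketch below is purely illustrative: the artifact name and the registry of approved hashes are assumptions, not part of any specific standard.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical registry of approved artifacts, produced during model review
# and stored separately from the deployment environment.
APPROVED_HASHES_FILE = Path("approved_model_hashes.json")

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path) -> bool:
    """Return True only if the artifact's digest matches the approved registry."""
    approved = json.loads(APPROVED_HASHES_FILE.read_text())
    expected = approved.get(path.name)
    return expected is not None and expected == sha256_of(path)

if __name__ == "__main__":
    model_path = Path("traffic_flow_model.onnx")  # hypothetical artifact name
    if not verify_artifact(model_path):
        raise RuntimeError(f"{model_path} is not an approved model artifact; refusing to load")
```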
Additional requirements are placed on AI systems for vehicle control, including testing in digital twins and test sites, operational transparency, interpretability of output (decision) results for the vehicle operator (driver), seamless transfer of control to the driver, and several others. A more detailed discussion of this class of systems is provided in the section dedicated to highly automated automobile transport.

Unfortunately, high-quality training data and an accurate, functionally verified AI model do not yet fully guarantee the safety of the AI system. During the operational phase, AI systems are vulnerable to attacks via manipulated input data: for example, data from certain measurement systems can be deliberately modified so that the AI model's output no longer matches the true measurements. For vehicles that rely on such measurement data, these attacks are particularly dangerous, as even tiny manipulations can have a significant impact on the AI system's behavior. Attack algorithms targeting input data are quite diverse, and their feasibility has been demonstrated many times in practice. One effective countermeasure is the so-called adversarial training of AI models [7, 8].
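As a rough illustration of the idea behind [7], the sketch below augments a standard training step with inputs perturbed by the fast gradient sign method (FGSM). It is a minimal PyTorch example: the model, optimizer, and perturbation budget are placeholders, and a production setup would also clamp perturbed inputs to the valid sensor range.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon):
    """Craft an FGSM adversarial example: one step of size epsilon along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.03):
    """One training step on a mix of clean and FGSM-perturbed inputs, in the spirit of [7]."""
    model.train()
    x_adv = fgsm_perturb(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```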
The potential vulnerabilities of the hardware that runs AI model inference are also worth noting, but in the context of transportation this is a difficult topic to summarize because of the huge variety of hardware solutions in use. A brief description of successful attacks on AI hardware is provided in the relevant section of the website.
Risks of Generative AI
We have examined the process of creating an AI system and the associated risks, primarily focusing on the fields of autonomous navigation, freight transportation, and monitoring of technological processes in transport. One significant area left outside the scope of this discussion is passenger transportation, where AI can also be used to interact directly with humans.
In this field, in addition to the previously mentioned risks, there are concerns related to the improper functioning of generative AI (large language models) in AI-powered passenger service assistants. Here are just a few possible consequences of such malfunctions:
- incorrect recommendations for route planning;
- abuse of personalization (such as offering a user more expensive tickets under the guise of the best option);
- data falsification (regarding flight delays, available routes, and so on);
- discrimination and bias in recommendations (such as ignoring requests from certain groups of passengers or deliberately providing suboptimal solutions).
The causes of AI failures can vary: they may result from malicious actions—such as poisoning of training data or adversarial input attacks—or from internal AI deficiencies, such as errors in algorithms or data. In particular, data falsification is not always the result of an attack. It can arise due to the so-called "hallucination" effect of a model, where AI "unintentionally" generates false information. Issues related to the use of generative AI are under close scrutiny by regulators in all leading technological countries.
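One common mitigation for such failures is to check factual claims in an assistant's reply against a trusted operational data source before showing them to passengers. The sketch below is purely illustrative: the flight numbers, the in-memory "trusted" lookup, and the function names are hypothetical stand-ins for a verified internal API.

```python
from dataclasses import dataclass

@dataclass
class FlightStatus:
    flight_no: str
    delay_minutes: int

# Hypothetical trusted source of operational data; in practice this would be
# a call to the carrier's own verified system.
TRUSTED_STATUS = {
    "FL100": FlightStatus("FL100", delay_minutes=40),
    "FL200": FlightStatus("FL200", delay_minutes=0),
}

def grounded_delay_answer(flight_no: str, llm_claim_minutes: int) -> str:
    """Compare the assistant's claimed delay with the trusted record and
    fall back to the verified value if they disagree (guards against hallucinated delays)."""
    record = TRUSTED_STATUS.get(flight_no)
    if record is None:
        return f"No verified status is available for flight {flight_no}."
    if llm_claim_minutes != record.delay_minutes:
        # The generated answer contradicts the trusted source: discard it.
        return (f"Flight {flight_no} delay per the operational system: "
                f"{record.delay_minutes} min (generated answer rejected).")
    return f"Flight {flight_no} delay: {record.delay_minutes} min."

if __name__ == "__main__":
    print(grounded_delay_answer("FL100", llm_claim_minutes=120))
```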
In conclusion, the application of AI in transportation has immense potential but requires careful risk management and multi-layered protection, ranging from sensors and algorithms to human factors, legal, and regulatory measures. Only such a multi-layered approach can unlock AI's potential without compromising safety.
References
- 1. PNST 866-2023 “Artificial intelligence systems in water transport. Use cases”.
- 2. PNST 884-2023 “Artificial intelligence in the railway industry. Use cases”.
- 3. GOST R 70980-2023 “Artificial Intelligence systems in road transport. Intelligent transport infrastructure management systems. General requirements”.
- 4. GOST R 70889-2023 “Information technology. Artificial intelligence. Data life cycle framework”.
- 5. PNST 847-2023 “Artificial intelligence. Big data. Functional requirements for data provenance”.
- 6. PNST 848-2023 “Artificial intelligence. Big data. Overview and requirements for data preservation”.
- 7. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and Harnessing Adversarial Examples. In ICLR, 2015.
- 8. Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR, 2018.