Since the start of its full-scale invasion of Ukraine in February 2022, Russia has worked on many fronts to attack and weaken Ukraine, including in the information space. The arsenal of Russian tactics and methods remains diverse, from fake videos to false-flag operations, and it continues to evolve alongside technological developments.
At the same time, artificial intelligence and its practical applications, such as text-generating chatbots and the diffusion models used to generate images, have become available to everyone, not just the usual circle of researchers at private companies and leading universities. For all its positive uses, generative AI has also been adopted by disinformation actors to speed up their work.
Although AI has not (yet) displaced traditional methods of creating and spreading disinformation, in this article I examine real-world applications of artificial intelligence in the Kremlin's information operations, covering text, audio, and video content as well as automation, to show where the aggressor state is already using these tools.
Video
The use of artificially altered videos began almost immediately after the invasion: on March 16, 2022, Russian actors hacked the website of a Ukrainian media outlet and uploaded a poorly made fake video of Ukrainian President Volodymyr Zelensky. In the video, a fake Zelensky claimed he had resigned and fled Kyiv, and called on the Ukrainian Armed Forces to surrender and save their lives. The video's extremely low quality made it obvious that it was a deepfake in which the president's face had been superimposed on another body. Even pro-Kremlin sources on Telegram called it a fake, while boasting that the video had nevertheless “managed to force some Ukrainian soldiers to surrender.”
However, the technology is advancing, and the quality of video fakes is improving. For example, Russian Telegram channels distributed a TikTok video allegedly showing soldiers of the 117th OBTrO [Separate Territorial Defense Brigade, ed.] calling on Ukrainian General Valery Zaluzhny to remove the current government. The brigade later denied the video's authenticity and published a statement saying that the people shown in it had never served in the brigade. The Center for Countering Disinformation suggested that the video was created using deepfake technology.
An example of video manipulation using face masks (TikTok)
A separate example that combines video and audio manipulation is lip-syncing, a method in which a person's voice is cloned and only the lower part of the face is altered to match the words being put into the speaker's mouth.
One striking example of such a campaign was a fake speech by Oleksiy Danilov, former secretary of the National Security and Defense Council. A number of Russian propagandists published an interview in which Danilov allegedly takes responsibility for the terrorist attack at Crocus City Hall and promises Russians more “fun.” However, Danilov gave no interview to the telethon on the day of the attack and never said those words. His facial expressions do not quite match his words, and the fabricated video was stitched together from two different sources: interviews that Kirill Budanov and Danilov gave to different media outlets.
In another case, a video appeared in which a man under arrest confessed that Ukrainian intelligence had ordered him to kill the American far-right media personality Tucker Carlson during his visit to Moscow in February 2024. A later report by Polygraf determined that the voice heard in the video shows signs of “digital manipulation.” Moreover, the man's facial expressions do not match the words he is saying.
Although fake videos are quite common, the technology is still imperfect, and such disinformation messages are quickly debunked by fact-checkers.
Audio
The use of audio is becoming more widespread and works on several levels. The first is voice cloning, which generates speech in a specific person's voice. This technology has positive uses, such as recreating the voices of deceased actors for new roles and cameos. One example is the Ukrainian startup Respeecher, which brought back to life the voice of James Earl Jones, the actor who voiced Darth Vader in Star Wars, for a new role.
However, the aggressor state uses similar technologies for its own purposes. In one case, a fake audio recording of Texas Governor Greg Abbott was built from a Fox News interview about US immigration policy. Russian sources published a modified version of the interview on Telegram with the governor's voice altered, while the video was edited to insert additional footage, known as b-roll, at the moments when the speaker is off screen. In the fabricated audio, Abbott claims that former US President Joe Biden should learn from Russian President Vladimir Putin “how to work in the national interest.” In the original interview, the governor mentioned neither Putin nor Biden, as representatives of Fox News and the governor's office confirmed.
The other side of the coin is the use of synthetic audio to dub videos in order to hide the accent or nationality of a video's producer. This tactic was used in what was later recognized as the largest influence operation ever exposed on TikTok: DFRLab and BBC Verify jointly investigated thousands of propaganda videos, in seven languages, accusing the Ukrainian leadership of corruption. Propagandists prepared scripts that were voiced by artificial intelligence with a neutral accent, concealing the campaign's origin. The audio was then laid over sets of images that supposedly “proved” the wealth of officials, allowing the Russian producers to avoid immediate detection.
Text
One of the most obvious uses is translation. At the beginning of the invasion, written Russian disinformation in Ukraine was routinely ridiculed for its grammatical errors, but the quality of the texts later improved significantly. For example, the “Doppelganger” campaign promoted negative stories about Ukraine with imperfect but plausible translations that looked better than previous efforts. According to Recorded Future, generative AI was used to create articles for English-language websites linked to the Doppelganger campaign, such as Electionwatch.info: original news stories from other websites were rewritten by AI with a predetermined bias or slant. Large language models are also used to generate comments. An OpenAI report, for instance, describes how the company's specialists noticed and stopped the use of its model in an information campaign that generated comments for social networks.
A problem not directly related to text generation is the contamination of large language models, which are trained on text from the internet. A study by NewsGuard shows that large language models can cite Russian or pro-Russian disinformation outlets when users ask chatbots highly specialized questions. Some researchers have linked this to the Pravda propaganda network, which also pollutes Wikipedia, but the problem is broader: language models can ingest and cite Russian state media, which publish hundreds of items a day, ranging from quotes from officials to blatant manipulation and disinformation.
Automation
Artificial intelligence is used not only to generate content but also to build the infrastructure for spreading disinformation through artificially created social media accounts.
In July 2024, the US Department of Justice announced that it had disrupted a Russian-origin network of AI-enhanced fake accounts built to spread propaganda and disinformation in the US. According to the agency's documents, the software used by the Russians contained AI components that generated images and text.
AI thus has many applications in the organization, preparation, and execution of information operations. Malicious actors use it to create social media accounts, rewrite content for their websites, generate comments, and manipulate video and audio material, all to influence users and spread falsehood, mistrust, and despair. It is worth noting, however, that AI is only a tool, one that can be used not only to spread disinformation but also to detect and counter these threats. Startups such as LetsData use artificial intelligence for early detection of information threats, making it possible to counter them more effectively.
Copyright: European Journalism Observatory (EJO)
The Institute of Communication Studies (ICS) is a member of the European Journalism Observatory (EJO). The views expressed on this page are those of the authors and do not necessarily reflect the views, policies and positions of the EJO and the ICS.
