Europe’s blueprint shows how to push back against foreign information manipulation

Konrad Bleyer-Simon

Media

02.06.25

Content that is harmful but legal needs to be tackled with freedom of expression in mind. The EU approach provides an overall framework, which needs to be adapted to each specific country setting.

Around the time when Donald Trump was first running for president, a group of youngsters made headlines. The so-called “Macedonian Teens” were smart enough to figure out that they could build a flourishing business by producing fake content on social media and targeting foreign audiences. Without any (known) political goals, they became one of the actors featured prominently in discussions about foreign interference in elections.

I tend to think of these young people as small entrepreneurs who had a good understanding of the digital market and successfully identified a niche in which to make a profit. Unfortunately, the niche they identified served purveyors of disinformation and foreign propagandists.

As we have learned over the past decade, the operators of online platforms treat disinformation as a driver of engagement on their services and have built their business models around the amplification of all kinds of divisive messages. The opaque algorithms of social media and other large online platforms have not only facilitated the spread of low-quality harmful content but also allowed the producers of disinformation and propaganda to make money from these activities.

An investigation by the Global Disinformation Index found, for example, that in 2021 the FIMI actors Russia Today and Sputnik, as well as the disinformation publisher Breitbart, earned more than USD 700,000 per month through online advertising services.

What is FIMI?

FIMI (foreign information manipulation and interference) is an elusive and constantly evolving threat that overlaps with certain other risks to information integrity. In many cases, FIMI is a form of disinformation that clearly originates from sources outside the EU; however, the term covers not just certain forms of harmful content but also behaviours and operating methods (TTPs – Tactics, Techniques, and Procedures), such as hack-and-leak attacks, deepfakes and coordinated inauthentic activities. Social media is an important terrain for FIMI. According to the European Democracy Action Plan (2020), foreign interference in the information space, “often carried out as part of a broader hybrid operation, can be understood as coercive and deceptive efforts to disrupt the free formation and expression of individuals’ political will by a foreign state actor or its agents”.
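Because FIMI is defined as much by behaviour as by content, defenders typically record incidents in a structured, machine-readable form (the EEAS, for instance, uses the open STIX standard in its threat reports) so that TTPs can be compared across campaigns. Purely as a sketch of the idea – the field names and TTP labels below are illustrative, not an official schema – an incident record might look like this in Python:

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class FimiIncident:
    """Illustrative record of one FIMI incident; not an official EEAS schema."""
    title: str
    attribution: str                                 # e.g. "state-linked"
    platforms: list[str] = field(default_factory=list)
    ttps: list[str] = field(default_factory=list)    # observed tactics/techniques

# Hypothetical incident, with made-up TTP labels for illustration
incident = FimiIncident(
    title="Cloned news site amplifying election narratives",
    attribution="state-linked",
    platforms=["X", "Telegram"],
    ttps=["impersonate_news_outlet", "coordinated_inauthentic_amplification"],
)
print(incident.title, incident.ttps)
```

Recording behaviour alongside content is what allows analysts to recognise the same operation even when its narratives change.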


When assessing the foreign actors that pose the greatest threat to information integrity in Europe, the country most frequently mentioned as the origin of information manipulation is the Russian Federation. The sources are official state channels (such as government accounts), state-controlled outlets (such as the well-known RT and Sputnik), state-linked channels (actors whose affiliation is not publicly disclosed but which have ties to the state, the intelligence services or people close to the political elites) and state-aligned channels, where control or funding by the state cannot be proven but the narratives clearly align with those of the previously mentioned actors.
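For those cataloguing sources, this four-tier attribution scale can be encoded directly in incident data. A minimal sketch (the tier names follow the paragraph above; the encoding itself is an illustrative assumption):

```python
from enum import Enum

class SourceAttribution(Enum):
    """Four-tier scale for a channel's relationship to a foreign state
    (tier names follow the text above; this encoding is illustrative)."""
    OFFICIAL_STATE = "official state channel"      # e.g. government accounts
    STATE_CONTROLLED = "state-controlled outlet"   # e.g. RT, Sputnik
    STATE_LINKED = "state-linked channel"          # undisclosed ties to the state
    STATE_ALIGNED = "state-aligned channel"        # no proven control, aligned narratives

# Declaration order mirrors decreasing confidence in formal state control
for tier in SourceAttribution:
    print(f"{tier.name}: {tier.value}")
```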

Unfortunately, the narratives of foreign information actors often sound compelling and attract a lot of attention through their sensationalist and divisive framing – thereby tempting some media outlets and politicians to apply similar tactics or to amplify certain narratives.

The European External Action Service (EEAS) has found signs of foreign information attacks in several EU member states, as well as in EU candidate countries. The Doppelgänger operation, for instance, created websites mimicking those of established mainstream outlets across the EU. Between late 2022 and late 2023, high numbers of foreign information attacks were recorded in Poland, Germany and France (more than 20 per country). Some attacks (between one and five) were also registered in North Macedonia and all other countries of the Western Balkans.
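Doppelgänger-style clones typically live on domains that differ from a legitimate outlet’s address by only a character or two, or that reuse the outlet’s name under a different top-level domain. One common detection heuristic is therefore to compare newly observed domains against a watchlist of legitimate outlets and flag near matches. The sketch below shows the idea; the watchlist, threshold and function names are illustrative, and real monitoring pipelines are considerably more elaborate:

```python
from __future__ import annotations

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute ca -> cb
        prev = curr
    return prev[-1]

# Hypothetical watchlist of legitimate outlet domains
LEGITIMATE = ["spiegel.de", "lemonde.fr", "ansa.it"]

def imitated_domain(candidate: str, max_distance: int = 1) -> str | None:
    """Return the legitimate domain a candidate appears to imitate, if any.

    Compares the part left of the final dot, so both typosquats
    ("spiegei.de") and same-name clones on another TLD ("spiegel.ltd")
    are caught, while the genuine domain itself is never flagged.
    """
    name = candidate.rsplit(".", 1)[0]
    for real in LEGITIMATE:
        if candidate == real:
            continue  # the genuine site itself
        if edit_distance(name, real.rsplit(".", 1)[0]) <= max_distance:
            return real
    return None

print(imitated_domain("spiegel.ltd"))  # -> spiegel.de
print(imitated_domain("lemonde.fr"))   # -> None (genuine domain)
```

In practice such string heuristics are only a first filter; analysts combine them with registration data, hosting patterns and content comparison before attributing a domain to an operation.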

These attempts can seriously undermine societies’ trust in institutions, trigger interethnic conflicts and compromise the integrity of elections. Due to indications of extensive interference in last year’s election campaign in Romania, the Constitutional Court annulled the results of the first round of the presidential election. Apart from Russia, the EEAS also assesses FIMI originating from China, but such campaigns can originate from a range of other countries as well.

Relevant obligations of digital platforms under EU policies

European policymakers advocated early on for a common European approach, in order to avoid a fragmented policy landscape in the face of a cross-border problem. This approach rests on the notion that legal content, even if it might be considered harmful, “is generally protected by freedom of expression and needs to be addressed differently than illegal content”. To limit the potential harm such measures may cause to freedom of expression, the EU approach emphasises transparency and tackles some of the key motives behind publishing and spreading such content – for example, the ability to profit from harmful online activities.

The Digital Services Act (DSA) is probably the best-known tool in the EU’s efforts to limit the spread and prevalence of harmful content. It establishes a framework of transparency and clear accountability for online platforms, especially those designated as “very large online platforms” (VLOPs), such as X, YouTube or TikTok. Among other things, the regulation obliges these platforms to identify and mitigate systemic risks such as disinformation, calls for a code of conduct on online advertising, and requires VLOPs to undergo a yearly audit at their own expense.

Another important tool, the self-regulatory Code of Practice on Disinformation, was adopted ahead of the 2019 European Parliament elections. As part of this ground-breaking effort, some of the largest online platforms committed to obligations that the law did not otherwise impose on them: they promised to prevent purveyors of disinformation from generating revenues through their services, limit the use of bots and improve the transparency of political advertising, while at the same time empowering users and researchers.

This year, the Code was transformed into a co-regulatory Code of Conduct under the Digital Services Act, and now serves as guidance for platforms’ mandatory risk mitigation efforts. Additional protections for the online information environment in the EU can be found, among other places, in the Digital Markets Act, the European Media Freedom Act, the Artificial Intelligence Act and the Regulation on the transparency and targeting of political advertising.

How should EU member states and candidates deal with FIMI?

The EU adopted a Strategic Compass in March 2022 with the aim of strengthening the bloc’s security and defence policy by 2030. One of its components is a catalogue of instruments for tackling and responding to FIMI operations. This so-called FIMI Toolbox emphasises the need to improve situational awareness (among other things through the systematic collection of evidence and information sharing), resilience building (such as strategic communications), “disruption and regulation” (laws and policies aimed at preventing, deterring or responding to FIMI attacks, including the transparency measures highlighted earlier), as well as diplomatic responses (such as international cooperation or sanctions).

EU regulations such as those listed above cover many aspects of the FIMI threat – nevertheless, not even EU governments should regard them as self-implementing and sufficient. Many components need to be adapted to the local situation. Moreover, additional measures may be needed at the national level to address countries’ own specific problems and vulnerabilities, in order to make sure that citizens have access to a safe, reliable and high-quality information sphere. Due to the complexity of the problems related to FIMI and disinformation, it is important to follow a whole-of-society approach, have a long-term strategy, coordinate across policy domains, and look for international partners with whom to share knowledge and complement each other’s efforts.

When designing measures to deal with disinformation, Epstein’s four criteria for effective regulation of disinformation can serve as a useful guide. According to him, any action a) should keep undesirable side-effects (harm) to a minimum, b) should be proportionate to the threat, c) should be able to adapt to changes in technology, and d) should be determined by independent agencies.

In this process, the involvement of diverse and independent actors is key: this means not just institutions (media regulators, electoral commissions, educational institutions) but also the strong involvement of civil society, the private sector and the media.

Konrad Bleyer-Simon

Konrad Bleyer-Simon is a Research Associate at the Centre for Media Pluralism and Media Freedom (European University Institute) and works on the Policy Analysis and Research task of EDMO. He’s involved in EDMO’s work on structural indicators and chairs the EDMO Hubs Policy and Analysis Working Group. He holds a doctoral degree from the Human Rights Under Pressure joint program of the Freie Universität Berlin and the Hebrew University in Jerusalem as well as a Master of International Affairs degree from Columbia University. Prior to working at CMPF/EDMO, he worked for NGOs and news media in Berlin, Brussels, Bishkek and Budapest. In his research he looks at media and anti-disinformation policymaking, media capture, as well as new revenue models for news media.