In this cross-interview, Timo Lenk and Martin Lestra discuss how disinformation is spread, what risks it poses, and how their respective projects contribute to exposing and combating manipulative narratives.
Timo Lenk and Martin Lestra are both engaged in the fight against disinformation, though from different angles. Lenk, a postdoctoral researcher at TU Dortmund University, contributes to ADAC.io, a Horizon Europe project aimed at strengthening democratic resilience against Foreign Information Manipulation and Interference (FIMI). Lestra works at the French company opsci and coordinates PROMPT, a pilot project co-financed by the European Commission that focuses on detecting and decoding disinformation narratives and campaigns.
Notably, Lenk analyzed manipulative communication targeting EU climate policies in connection with the 2024 European Parliament elections. In a case study, he highlights strategic manipulation tactics aimed at polarizing public discourse. His insights align with the PROMPT project, which focuses on detecting and combating disinformation narratives.
The first ADAC.io case study, “Strategic manipulation in the context of the 2024 European elections”, illustrates how such manipulation tactics played out in the 2024 European elections. You show one technique used by Russian propaganda outlets, which is “to amplify existing disinformation and propaganda stories” by quoting “nationalist and right-wing politicians and media sources from within the EU who are criticizing climate policies”. As the study notes, such “repeated exposure to certain messages potentially increases their perceived truthfulness”. What are the most common themes or patterns you have identified in disinformation campaigns targeting democratic processes, taking the European elections as an example?
Timo Lenk: The research revealed three main patterns. (a) Drawing on existing fears and grievances, such as the fear of economic decline, job loss, and rising prices, and connecting them to allegedly misguided EU policies. For example, narratives painted a picture of the deindustrialization of the EU caused by energy transition policies. (b) Responding to breaking news events to benefit from media attention cycles, such as the farmers’ protests ahead of the elections. (c) Merging propaganda messages, such as attacks on EU institutions and policies, with war propaganda in the context of Russia’s full-scale invasion of Ukraine. In all three cases, we saw that narratives draw on negative emotions, especially fear, hate, and outrage.
Martin Lestra: The narratives Timo Lenk mentions are central to PROMPT. We track what is being said about the war of aggression against Ukraine, LGBTQI+ issues, and of course the EU elections. We see that there are many disinformation “coals” that nefarious actors - foreign regimes and their domestic proxies - can blow on, especially in a tense electoral period. Picking up divisive issues in each national context - farmers’ protests, for example - is a very efficient strategy compared to inventing completely new topics (those tend to make the news or get shared on private messaging apps mainly because they are outrageously absurd). We also observe that disinformation actors like to mix issues - they talk about the war in Ukraine, LGBTQI+ issues, and the EU elections all at once. This means that we see fewer narratives on the EU electoral process itself than on other themes, such as the EU’s putative responsibility for the war in Ukraine, the EU’s role in the impoverishment of farmers, its neocolonial project in Africa, and so on. We also observe that the “common themes” are in fact quite few. There are few narratives - or “mega-stories” - but many variations of them. What is said about farmers’ resentment against the elites in France looks a lot like what is being said about farmers in Poland or Spain. And what fact-checkers will also tell you, because they do this day in, day out, is that these narratives often re-emerge, like old hats, in future elections! In other words, the disinformation market is volatile, but some of its basic mechanics are always working in the background, giving new shapes to old stories.
Disinformation narratives carry significant risks: especially around elections, they can mislead voters and undermine the foundations of our democracies. The ADAC.io case study emphasizes: “The results shed light on a diverse disinformation and propaganda ecosystem blurring the lines between FIMI and domestic information manipulation”. How do manipulative narratives like “green tyranny” gain traction, and what role do digital platforms play in amplifying them?
Timo Lenk: Our case study suggests that propaganda outlets as well as far-right media and blogs draw on similar disinformation themes in their narratives, such as the decline of Europe or the EU oppressing its member states. This is also known from prior research. We find that narratives apply frames and distinct metaphors such as “green tyranny”, presumably to induce strong negative emotions, alienate EU citizens from EU institutions, and aggrieve publics. Databases such as EUvsDisinfo, run by the East StratCom Task Force of the EEAS, but also a growing body of research from different disciplines, reveal how basic narratives recur in different incidents of information manipulation.
Now turning to the second part of the question: digital platforms amplify such narratives in two ways. First, their structures favor the emergence of echo chambers, where like-minded individuals evade corrective mechanisms such as fact-checking or counterargument. Second, platform algorithms favor content that attracts the greatest attention in order to keep users on the platform; and the content that receives the greatest attention is emotional and scandalous. As long as large social media companies do not change their business models, their platforms will continue to be ideal ground for inflammatory content.
Martin Lestra: The more we work on disinformation, the less we are concerned with the opposition between FIMI, DIMI (domestic information manipulation and interference), and so on. Often this seems to be more of an (important) distinction in the mandates carried out by organisations. In any event, it’s very important to understand how narratives propagate across social media platforms, across different networks, and across different languages. We worked a lot on the connections between “French” X and “Russian” Telegram, for example. We know that digital platforms help narratives gain traction. But it’s not only about the things we measure online - the number of likes, of comments. It’s also because everybody is so interested in what is going on on social media. How many times do you see news coverage starting with, or including, a sensational and much-viewed social media post? Digital platforms make narratives popular also because they infuse mainstream media. We know much less about how social media affects our opinions than we do about watching TV and reading the news. Fact-checkers know this very well because they make increasingly difficult decisions about whether suspicious narratives should be “checked” or “ignored”. But our digital bias is constantly being triggered. Platforms know this, and nefarious actors do too. We think we should work more and more towards understanding the connections between digital platforms, websites, and mainstream media.
The PROMPT project, for example, uses Large Language Models (LLMs) to deepen the monitoring of the emergence and spread of narratives. One of the project’s key questions is: Does disinformation circulate across social platforms with “universal” rhetorical patterns or with local linguistic and cultural idiosyncrasies? With this in mind, how can projects like ADAC.io and PROMPT contribute to detecting disinformation early and developing effective responses to mitigate its impact?
Timo Lenk: In the ADAC.io project, one of our aims is to advance methods of information manipulation analysis and contribute to a shared terminology among analysts, be they researchers, journalists, or OSINTers. For example, the freely accessible DISARM Framework, managed by the DISARM Foundation, is a large incident-based framework for analyzing information manipulation, which our project partners are working on optimizing.
Applying LLMs to narrative analysis is also an exciting approach! In the future, I think two aspects will be crucial. First, there should be more exchange between groups and projects working on different approaches to detecting and counteracting disinformation and manipulative narratives and, ideally, an effort to interlock different tools. Second, we need a whole-of-society approach to strengthen the long-term resilience of society. For that purpose, the first thing we need to do is increase problem awareness among different stakeholder groups, which we try to contribute to through workshops within the scope of the ADAC.io project. Educational institutions such as universities and schools, as well as media organizations and journalists, are among the most important stakeholders in this regard. If you are a journalist or work for a nonprofit organization, you are welcome to contact me for more information on our workshops!
Martin Lestra: Let me add something on the role of LLMs. We know more and more about the power of LLMs to support the fight against disinformation. The field is moving fast: we’ve moved from being able to detect simple things in text to detecting subtle things across texts, images, and videos, and in many more languages, though this is far from perfect. We are better at detecting irony, humour, and specific cultural references (like coded language), at dissecting the style in which social media posts are written, and at identifying the types of emotions they build on. It’s quite fascinating. But while technological progress is useful, it’s not sufficient if we want to develop more effective responses to mitigate the impact of disinformation. We need people who are able to use these tools, understand them, and deploy them at scale. In other words, our work on LLMs is one building block that gives more capabilities to those who investigate the digital space in search of malicious behaviours. Ultimately, we should be empowering many more people to critically reflect on what they encounter online (as we do offline). This is one part of the PROMPT project, but there is of course a lot to be done on media literacy that goes well beyond the field of LLMs. We hope to contribute more and more to this work.
There are many academic efforts to develop cutting-edge methodologies for analyzing narratives. The PROMPT project, for example, aims to fill the gap with an additional effort to make research operational for those who fight disinformation every day. And now, looking to the future: What role does interdisciplinary collaboration play in combating disinformation, and how can media, academia, and policymakers work together more effectively?
Martin Lestra: The concept of a “whole-of-society” approach, which Timo mentioned, has become tedious jargon to many, but that doesn’t make it any less relevant! In PROMPT, we try to embody that approach by bringing together fact-checkers, activists, academics, and industry actors. Part of the challenge we see is overcoming the polarizing role AI is playing among those who are best placed to use it. We see that many fact-checkers are, with reason, skeptical about or opposed to using AI in their work. And I’m not even speaking of the broader public! This means doing “in-reach” with those who are part of our project, in addition to outreach.
We work at the intersection of culture and technology, which requires different views on things. For example, my team brings 20+ years of empirical research on platforms, on the identification of communities, key opinion leaders, etc. Our consortium partners bring other things to the table. The added value is not only conceptual, it’s really hands-on: What is a useful AI-powered PROMPT feature we could develop? What should it help with, and what is unnecessary? What should it look like? How useful will it be in six months, in a year? If we zoom out, this whole-of-society approach requires a more common language across disciplines and professions to document what is going on. A “narrative” does not always mean the same thing to two different people. Lastly, to be more effective, we need a more shared understanding of the impact, or harm, of online disinformation. In many cases this impact is assumed rather than demonstrated. Interdisciplinary collaboration is really key to constructing, testing, and promoting shared indicators in our field.
Timo Lenk: Yes, I agree, a whole-of-society approach is crucial. We need the collective intelligence of all the areas you mentioned: academia, media, nonprofits. Like Martin, I experienced this first-hand in the ADAC.io project: one of the first things I did was take part in an analyst training in Lithuania. We were invited by the NGO Debunk.org, one of our project partners. The participants were mostly Polish and German researchers from the social sciences and humanities, engaging in a discussion on disinformation and manipulative narratives with analysts from the nonprofit sector who uncover incidents of information manipulation in practice and report them to the authorities. Although we brought quite different perspectives, the exchange was very fruitful. And I continue to experience these synergies between academia and professionals from different fields in the various collaborations within the project. In my view, promoting exchange and collaboration across disciplines and sectors is crucial to making civil society more resilient to information manipulation.
Copyright: European Journalism Observatory (EJO)
The Institute of Communication Studies (ICS) is a member of the European Journalism Observatory (EJO). The views expressed on this page are those of the authors and do not necessarily reflect the views, policies, or positions of EJO and ICS.
Merle van Berkum received her doctorate from the Department of Journalism at City, University of London, with a thesis on international climate reporting. She currently works as an academic project manager at the Erich Brost Institute for the "AMAZE!" project and as a senior researcher at the European Narrative Observatory.