Countering disinformation with the help of AI: Is it already possible?

Dr. Vuk Vucetic


Tales from the Region



Artificial intelligence can contribute to the generation of disinformation, but at the same time, it can provide potential solutions to countering that same disinformation.

The rapid advance of digital technologies and artificial intelligence (AI) has completely transformed various aspects of everyday life, including the ways in which information is produced, disseminated, and consumed. Alongside its obvious advantages, AI also introduces significant challenges, especially in the area of disinformation.

Examples of the use of artificial intelligence to generate fake content are already visible. Over the past year, some of the most widely shared fake images created with AI (the most popular image-generation tools being DALL-E and Midjourney) included a photo of Paris covered in garbage, as well as a photo of Elon Musk walking hand in hand with Mary Barra, the head of General Motors and one of Musk's main competitors. The images of Pope Francis in a Balenciaga-style jacket and of the supposed arrest of Donald Trump are also hard to forget. Fake content created with the help of such smart technologies is present on the domestic scene as well.

For example, a fabricated confession by Elon Musk that his roots are in Republika Srpska, or a fake address in which Dodik declares that he loves Bosnia and Herzegovina, are just some examples of how contemporary technologies can be used to generate primarily satirical and entertaining content.

In any case, the challenges posed by artificial intelligence in the field of disinformation are multiple and complex. The primary one is the ease with which fake content can be created. AI algorithms can generate realistic-looking images, videos, and audio recordings, making it difficult to distinguish real from fabricated content. This blurring of the line between the two complicates efforts to identify disinformation and curb its spread effectively.

In addition, artificial intelligence enables the automation of creating and spreading disinformation, which leads to faster and more massive circulation of fake content online. Algorithms can generate large amounts of disinformation in a short period, making an effective response difficult. Another problem is that algorithms can analyze user data and personalize disinformation to make it more persuasive and relevant to the targeted groups. Because the messages are tailored to users' individual interests and beliefs, the manipulation becomes more effective and the impact of disinformation on individuals and society as a whole grows.

Artificial intelligence and creating tensions in Bosnia and Herzegovina

One of the events that marked the previous period in Bosnia and Herzegovina was a poster published by the Faculty of Philosophy of the University of Mostar announcing a summer school in political science titled Methodology of Political Science: New Issues and Trends. The poster shows a stylized version of the Old Bridge (Stari most), one of the most recognizable symbols of the city. The problem is that the Islamic religious buildings that in reality stand near the bridge were omitted from the poster and replaced by buildings reminiscent of Christian religious architecture, which does not exist in that part of Mostar. Part of the public and of the institutions condemned the publication of the poster, claiming that the Islamic religious buildings were deliberately left out in order to present the Old Bridge, and the city of Mostar as a whole, as part of Croatian cultural heritage. In the eyes of these critics, the poster offers a tendentious interpretation of reality and deliberately creates tensions that deepen the existing divisions in the city.

The organizers, on the other hand, blame artificial intelligence, since the poster was generated with such a tool. They argue that the amount and intensity of the negative reactions to the poster were excessive and amounted to an attack on freedom of speech and thought. After the organizers received numerous threats and the publication of the poster triggered a wave of hate speech, the event was canceled.

This case shows that artificial intelligence can create more problems than benefits. In the case of the controversial poster, the AI-generated image omitted mosque minarets and added objects resembling church towers, misrepresenting the city's multicultural identity. Artificial intelligence algorithms can thus, intentionally or unintentionally, reinforce existing prejudices and divisive narratives. The lesson to be learned is that artificial intelligence cannot and should not be treated as a reliable mechanism for generating information and content. AI algorithms are not neutral: they reflect the values, attitudes, social prejudices, and political ideas loaded into the system, which can perpetuate existing social inequalities and discrimination.

AI algorithms, like any other human-made technology, are inherently influenced by the biases and attitudes of their creators, as well as by the data on which the AI is trained. Algorithms for evaluating employees or job candidates, for example, may reflect patriarchal tendencies in the workplace: if an algorithm is trained on historical performance data that favors male workers, it may end up preferring male candidates in hiring or promotion. The "pictures of reality" available to artificial intelligence can thus lead to algorithmic decisions that perpetuate stereotypes, reinforce disinformation, or marginalize certain voices and perspectives.
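The hiring example above can be made concrete with a deliberately simple sketch. A "model" that merely learns historical hire rates per group will reproduce whatever bias is in its training data; all records and numbers below are invented for illustration:

```python
from collections import defaultdict

# Toy illustration of data-driven bias: a naive "model" that learns
# historical hire rates per group reproduces the bias in the records.
# The records are invented ("m"/"f" group label, True = was hired).
history = [("m", True), ("m", True), ("m", True), ("m", False),
           ("f", True), ("f", False), ("f", False), ("f", False)]

def learned_hire_rate(records):
    """Return the hire rate the model 'learns' for each group."""
    hired, total = defaultdict(int), defaultdict(int)
    for group, was_hired in records:
        total[group] += 1
        hired[group] += was_hired
    return {g: hired[g] / total[g] for g in total}

rates = learned_hire_rate(history)
print(rates)  # {'m': 0.75, 'f': 0.25} -- the historical imbalance carries over
```

Nothing in the code is "prejudiced"; the skew comes entirely from the data, which is exactly the point the paragraph makes.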

To mitigate risks of bias and ensure that AI serves the public interest, it is critical to prioritize transparency, accountability, and ethical oversight in the design, development, and application of AI systems.


In addition, ongoing efforts to diversify the data sets used by algorithms, mitigate bias, and promote algorithmic fairness are essential to address the inherent limitations of artificial intelligence. We should not forget that journalists and editors, or gatekeepers in the broadest sense, are ultimately responsible for the content that is published. They are therefore required to make an additional effort to verify the credibility of AI-generated information and content before publication. This is particularly important in divided societies such as Bosnia and Herzegovina, where even the most ordinary announcement can carry different political connotations and become a pretext for spreading hatred and raising tensions.

AI: from problem to solution

As we have seen, artificial intelligence can contribute to the generation of disinformation, but it can also offer solutions for combating it. AI can be used as a tool for automated verification of claims, comparing them against information from credible and reliable databases. In addition, AI algorithms can detect manipulated images, videos, and audio recordings by analyzing their visual and audio characteristics and identifying signs of tampering. Techniques such as image forensics and deep-learning-based analysis can identify manipulated photos, deepfake videos, and other forms of synthetic and misleading content.
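The deep-learning forensics mentioned above are far beyond a blog post, but one building block of image comparison, the perceptual "difference hash" used to spot near-duplicates and lightly edited copies, can be sketched in a few lines. This is only a toy on invented pixel grids, not a real forensics tool:

```python
# Minimal difference-hash (dHash) sketch: a perceptual fingerprint that
# stays similar when an image is only lightly edited, unlike a
# cryptographic hash, which changes completely on any edit.

def dhash(pixels):
    """pixels: 2D list of grayscale values.
    Each hash bit is 1 if a pixel is brighter than its right neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append(1 if left > right else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# Toy 4x4 "image" and a lightly edited copy (one pixel changed).
original = [[10, 20, 30, 40],
            [40, 30, 20, 10],
            [10, 20, 30, 40],
            [40, 30, 20, 10]]
edited = [row[:] for row in original]
edited[0][0] = 25  # the small "manipulation"

h1, h2 = dhash(original), dhash(edited)
print(hamming(h1, h2))  # small distance -> likely the same underlying image
```

Real systems (including reverse-image-search services) work on much larger, resized images, but the principle of comparing compact perceptual fingerprints is the same.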

AI monitoring systems can follow the emergence and development of online trends in real time to identify potential disinformation campaigns. These systems use machine learning algorithms to analyze news, social media posts, and online discussions for signs of disinformation, allowing organizations to take proactive countermeasures.
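One simple ingredient of such monitoring is burst detection: flagging terms whose frequency suddenly spikes relative to a baseline period. The sketch below is a toy stand-in for the machine-learning systems described, with all posts and the threshold invented for illustration:

```python
from collections import Counter

# Toy "trend monitor": flags terms whose frequency in the latest window
# spikes relative to a baseline window. Real systems use ML classifiers
# over news and social media; this only illustrates burst detection.
baseline_posts = ["weather is nice", "new cafe opened", "weather report today"]
latest_posts = ["shocking claim about vaccine", "vaccine claim spreads",
                "vaccine hoax shared again", "weather is nice"]

def term_counts(posts):
    """Count word occurrences across a list of posts."""
    return Counter(word for post in posts for word in post.split())

def bursty_terms(baseline, latest, ratio=3.0):
    """Return terms whose latest count exceeds ratio * (baseline count + 1)."""
    base, now = term_counts(baseline), term_counts(latest)
    return {t for t, c in now.items() if c >= ratio * (base.get(t, 0) + 1)}

print(sorted(bursty_terms(baseline_posts, latest_posts)))  # ['vaccine']
```

A flagged term is not proof of disinformation, only a signal that human fact-checkers or a downstream classifier should look closer, which mirrors how real monitoring pipelines are used.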

One positive domestic example that can be useful to various actors is the Disinformation Toolkit, developed by Albani Associates in partnership with Mediacenter from Sarajevo. The Disinformation Toolkit is a multilingual application available on desktop and mobile devices. It provides tools and resources for combating disinformation, including explanations of key terms and the differences between misinformation, disinformation, malinformation, and fake news.

Alongside tools for combating disinformation, it also points to other useful resources such as Quetext, Diffchecker, and TinEye. Quetext is an application for text analysis that detects plagiarism; it can also help detect whether artificial intelligence was used to create a piece of text, which is useful for spotting AI-generated disinformation. Diffchecker highlights the differences between two pieces of text, allowing users to quickly spot changes or similarities; the website also offers the ability to compare images, PDFs, tables, and other types of documents.
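The core idea behind a diff tool like Diffchecker, line-by-line comparison of two texts, is also available in Python's standard library `difflib`. The two example texts below are invented; this is a minimal sketch, not a description of how Diffchecker itself is implemented:

```python
import difflib

# Minimal text comparison in the spirit of Diffchecker, using the
# stdlib difflib module. Lines prefixed "-"/"+" mark removed/added text.
original = "The mayor announced a new budget on Monday.\nTaxes will not change."
altered  = "The mayor announced a new budget on Monday.\nTaxes will double."

diff = list(difflib.unified_diff(
    original.splitlines(), altered.splitlines(),
    fromfile="original", tofile="altered", lineterm=""))
print("\n".join(diff))
```

This kind of comparison is handy for checking whether a quoted statement has been quietly altered between two versions of an article.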

Finally, TinEye is a tool that shows when and where a photo first appeared on the Internet. Also useful are tools for searching social media and checking profiles and usernames, such as namech_ch or namechecker. Accountanalysis, Truthnest, and Twitonomy are tools for analyzing accounts on the X network (formerly Twitter).

All of these tools are intended for journalists, media workers, and fact-checkers, to facilitate the process of identifying and suppressing disinformation. In addition, the Disinformation Toolkit offers access to a network of fact-checking organizations and media resources, allowing users to cross-check information and verify the credibility of sources. Using AI-based tools and technologies, such as integrated fact-checking mechanisms and social media monitoring, users can verify information and help preserve factual accuracy in public discourse. As AI continues to develop, such tools will play an increasingly important role in protecting the integrity of information and promoting transparency in media reporting.

The fight against AI-generated disinformation requires collective action by various actors: governments, media outlets, and non-governmental organizations. Three key directions of that fight can be discerned. First, it is important to develop quality, reliable fact-checking mechanisms. In parallel, it is important to build media literacy among young people so that they understand both the opportunities and the dangers of the modern, media-shaped reality, that is, so that they develop defense mechanisms against disinformation.

Finally, one imperative is to raise awareness among professional journalists and ordinary users alike of the responsibility they bear for publishing, sharing, and spreading content online. With concerted action in these three areas, it will be possible to build long-term, systemic resistance to the harmful effects of disinformation in the AI environment.


The blog was created as part of the “Tales from the Region” initiative led by Res Publica and the Institute of Communication Studies, in cooperation with partners from Montenegro (PCNEN), Kosovo (Sbunker), Serbia (Autonomija), Bosnia and Herzegovina (Analiziraj.ba), and Albania (Exit), within the project "Use of facts-based journalism to raise awareness of and counteract disinformation in the North Macedonia media space (Use Facts)", with the support of the British Embassy in Skopje.

Please refer to the Terms before commenting and republishing the content. Note: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of the Institute of Communication Studies or the donor.

Dr. Vuk Vucetic

Dr. Vuk Vucetic graduated in 2010 from the Faculty of Philosophy of the University of East Sarajevo, majoring in journalism. He completed his master's studies in communication in 2013 at the Faculty of Political Sciences in Sarajevo with the thesis "The political spectacle as a media phenomenon - the example of BiH". In 2018, he defended his doctoral dissertation, "Controversies of the mediatization of politics in contemporary Bosnia and Herzegovina", at the same faculty. Since 2011, he has been employed in the Department of Journalism at the Faculty of Philosophy of the University of East Sarajevo.