The rapid spread of artificial intelligence (AI) in recent years has transformed many industries, including the media, in how information is created and disseminated. While AI offers clear benefits in efficiency and automation, it also presents a challenge when it comes to the spread of malicious information and disinformation.
Understanding the relationship between AI and disinformation is essential for addressing the evolving structure and dimensions of digital disinformation. At the same time, artificial intelligence can support the production of fair and accurate information, saving time and easing other journalistic challenges in creating and publishing news.
The use of artificial intelligence in news reporting
Dukagjini TV was the first television station to use artificial intelligence, in its Debate Plus program. Through an integration with the Shqip.ai platform, the debate show is transcribed in real time, allowing certain categories of viewers to follow the program through the transcription shown on screen. Thanks to this feature, these citizens can stay informed about events in the country and follow the discussions and opinions about them.
The platform covers several categories of Albanian language usage that have been uploaded to, or "learned" by, the system. While transcribing the debates, this large language model (LLM) was able to pick up uses of the language outside the predefined categories, as spoken by the program's guests, enabling more accurate transcription.
This case can pave the way for the beneficial use of AI in media outlets more broadly. Media outlets in general, and newsrooms in particular, should cooperate with artificial intelligence experts and prepare for AI's integration into journalistic practice. This will make their work easier and save them time.
AI can be applied to transcribing interviews, writing news stories based on interviews, summarizing lengthy reports, and generating illustrative photos. In addition, in cooperation with experts in the field, chatbots could be built to help journalists find topics, draft interview questions, and write articles.
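As a simple illustration of the chatbot idea above, a newsroom helper might start from a prompt template that turns a topic and an interviewee into draft interview questions. The sketch below is purely hypothetical: the function name and the question patterns are assumptions, and a real assistant would pass such a prompt to an LLM rather than fill templates locally.

```python
# Hypothetical sketch: a template-based helper for drafting interview
# questions. A production newsroom chatbot would send such a prompt to
# an LLM; here we only fill local templates to illustrate the idea.

QUESTION_TEMPLATES = [
    "What first drew you to {topic}?",
    "What is the biggest misconception about {topic}?",
    "How has {topic} changed in the last year?",
    "What should our readers watch for next regarding {topic}?",
]

def draft_interview_questions(topic: str, interviewee: str) -> list:
    """Return a list of draft questions for a journalist to refine."""
    intro = "Questions for {}:".format(interviewee)
    return [intro] + [t.format(topic=topic) for t in QUESTION_TEMPLATES]

if __name__ == "__main__":
    for line in draft_interview_questions("AI in newsrooms", "a media researcher"):
        print(line)
```

The journalist stays in the loop: the generated questions are a starting point to be edited, not a finished interview plan.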
Finally, the Council of Written Media of Kosovo (SPMK) has included artificial intelligence in its Code of Ethics, meaning that media outlets must disclose when they use it. According to the rules prepared by SPMK, the principles of personal data protection, information security, and non-discrimination must be respected in the process of creating media content.
AI-generated content remains under the editorial responsibility of journalists and editors, who must comply with the Code of Ethics. Readers should also know which parts of the content were produced with AI, what type of artificial intelligence was used, and how it works. In addition, copyright and other intellectual property rights must be respected. When readers interact with an artificial intelligence, they should be informed in advance that they are not communicating with a person, and if the interaction bothers them, they should be offered contact with a human.
Source: cpomagazine.com
The role of AI in disinformation
Artificial intelligence plays a dual role in disinformation. On the one hand, AI tools and algorithms are key to detecting and combating disinformation by flagging, analyzing, and identifying suspicious content such as fake news. These tools have helped fact-checkers in their work of identifying and combating fake content. AI algorithms can help filter reliable sources for research and information gathering, compare information across sources, and identify patterns indicating that content is potentially misleading or fake.
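To make the "identify patterns" point concrete, here is a deliberately naive sketch of the kind of surface signal such tools look for: all-caps words, sensational punctuation, and clickbait phrases. Real fact-checking systems use trained models over far richer features; the cue list, weights, and threshold below are assumptions chosen purely for illustration.

```python
import re

# Deliberately naive illustration: score a headline for surface cues
# often associated with misleading content. Real systems use trained
# classifiers; the cue list and threshold here are illustrative assumptions.

CLICKBAIT_CUES = [
    "you won't believe",
    "shocking",
    "miracle",
    "they don't want you to know",
]

def suspicion_score(headline: str) -> int:
    """Count simple surface signals of sensationalism in a headline."""
    score = 0
    score += headline.count("!")                          # exclamation marks
    score += len(re.findall(r"\b[A-Z]{4,}\b", headline))  # ALL-CAPS words
    lowered = headline.lower()
    score += sum(cue in lowered for cue in CLICKBAIT_CUES)
    return score

def flag_for_review(headline: str, threshold: int = 2) -> bool:
    """Flag a headline for human fact-checking if enough cues accumulate."""
    return suspicion_score(headline) >= threshold
```

Crucially, a flag here only routes the item to a human fact-checker; the final judgment on whether content is false stays with people, which mirrors how such tools are used in newsrooms.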
On the other hand, artificial intelligence can be used to create sophisticated disinformation campaigns. AI-generated content, including deepfakes and manipulated images, can mislead audiences by convincingly presenting false information. The rapid advancement of AI technology makes it increasingly difficult to distinguish authentic content from fabricated content, blurring the line between reality and fiction.
The photos of the Pope in a white jacket, or of the supposed arrest of Donald Trump, were among the first cases to foreshadow the phenomenon of AI image generation.
During fact-checking at hybrid.info, we came across a dozen pieces of AI-generated content published by Albanian-language profiles and pages on social media, used to depict particular situations. These images drew considerable engagement on social media because of the persuasiveness and emotion they convey.
Artificial intelligence has opened up new dimensions in the creation of fake audio and video. Deepfakes, a form of synthetic media, are a technique that can substitute people's faces, voices, and expressions in a video so that they appear to say or do something they never did. The technology can also generate entirely new audio and video recordings that look and sound authentic.
Some deepfake content has already been published on social media, such as the video of the Prime Minister of Kosovo, Albin Kurti, speaking Arabic, the video with the cloned voice of Liridona Murseli reading an emotional text, and Elon Musk speaking Albanian, all published on social media, specifically on TikTok. Such content can influence public opinion because of the engagement it generates.
Meanwhile, 2024 is considered an election year worldwide: at least 64 countries (plus the European Union) are holding elections, which is why there is concern about the use of artificial intelligence to undermine the integrity of information and elections. Judging by public discussion, Kosovo may also hold elections this year, and there are concerns about the creation and distribution of deepfake videos, the cloning of politicians' voices, and other forms of manipulation on social media (pages and groups).
Therefore, journalists and media workers should be prepared to recognize and identify these forms of manipulation, while social media users (the audience) should be familiar with the characteristics of such content to avoid falling prey to it.
The spread of disinformation generated by artificial intelligence represents a significant challenge for society. AI-powered disinformation campaigns can manipulate public opinion and undermine trust in credible sources of information. The viral nature of social media increases the reach of misleading content, making it difficult to prevent the spread of false narratives. The lack of transparency around AI-generated content threatens the integrity of media outlets and of information itself, highlighting the urgent need for regulation and safeguards against digital disinformation.
The blog was created as part of the “Tales from the Region” initiative led by Res Publica and Institute of Communication Studies, in cooperation with partners from Montenegro (PCNEN), Kosovo (Sbunker), Serbia (Autonomija), Bosnia and Herzegovina (Analiziraj.ba), and Albania (Exit), within the project "Use of facts-based journalism to raise awareness of and counteract disinformation in the North Macedonia media space (Use Facts)" with the support of the British Embassy in Skopje.