
Thomas Gouritin: Without evaluation artificial intelligence will be more harmful than helpful

Interview

07.03.24


Artificial intelligence can be the most unbiased we can imagine, but in the end it is humans who, with their prejudices and "reflexes" for censorship, take part in the moderation process, says Thomas Gouritin, an expert in conversational artificial intelligence and chatbots. On the eve of the international symposium on media literacy in the age of artificial intelligence, organized by the Institute for Communication Studies together with our partners, this interview serves as an introduction to one of the most discussed topics in the world at the moment, which Gouritin will address as a participant in the event. He believes that investing in artificial intelligence is only part of the solution, and that more needs to be done on digital literacy in order to tackle online misinformation.

How can artificial intelligence be leveraged to detect and combat the spread of disinformation in digital platforms and social media?

Artificial intelligence shouldn’t be thought of (and used) as the alpha and omega in addressing the disinformation problem. Of course, it’s a great tool for tracking down patterns and sources of disinformation, and for analyzing large amounts of content in real time, but it won’t be enough.

We know well that debunking and fact-checking, as crucial and useful as they are, are not as glamorous and viral as the disinformation itself. When you live in your conspiracy bubble, counter-content doesn’t affect you very much, since the harm is already done.

What role can AI play in identifying deepfake content and preventing its harmful effects on public perception and trust?

AI can help find patterns and take down deepfakes, but once the content is out there on digital platforms and going viral it could already be too late to stop it. I’m really not sure that AI can help us prevent harmful effects on public perception and trust. Content regulation, effective moderation inside digital platforms, and AI literacy are, in my opinion, much more important than having the latest amazing AI model.

Can you provide examples of successful AI applications that have been effective in addressing disinformation campaigns or misleading content?

As of today, Large Language Models and all “GPT”-like generative AI applications are more effective at creating ever more harmful content than at analyzing and preventing the spread of that content. But researchers are working hard to move from analyzing large amounts of data after a disinformation campaign to predicting the next harmful one.

The path isn’t an artificial one: it requires a lot of great human engineering and real people. Good annotated data is one key to success, and many academics and institutions are working hard to find the right metrics to balance datasets. A recent example is the “Propagandist Pseudo-News” dataset by Géraud Faye et al., published in “Exposing propaganda: an analysis of stylistic cues comparing human annotations and machine classification”.
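
As a rough illustration of how such annotated data can be used (a minimal sketch only, not the pipeline from the paper; the texts and labels below are invented for demonstration), a simple stylistic-cue classifier can be trained on labelled articles with a few lines of scikit-learn:

```python
# Minimal sketch: training a text classifier on an annotated propaganda dataset.
# The texts and labels below are invented placeholders, not the real
# "Propagandist Pseudo-News" data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Officials confirmed the figures in a press briefing on Tuesday.",
    "They don't want you to know the SHOCKING truth behind this!!!",
    "The report cites three independent studies and names its sources.",
    "Wake up! The mainstream media is hiding everything from you.",
]
labels = [0, 1, 0, 1]  # 0 = regular news, 1 = propagandist style

# TF-IDF word and bigram features capture crude stylistic cues
# (loaded words, exclamation, direct appeals to the reader).
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["You won't BELIEVE what they are hiding from you!"]))
```

In practice, the quality and balance of the annotated dataset matters far more than the choice of model, which is exactly the point Gouritin makes above.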

Another interesting study looked at conversations on social platforms during the pandemic to understand the topics discussed on social media in France and in China: “Concerns Discussed on Chinese and French Social Media During the COVID-19 Lockdown: Comparative Infodemiology Study Based on Topic Modeling”.
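
For readers unfamiliar with topic modeling, the sketch below shows the general idea on a handful of invented posts (it is not the study’s actual method or data; LDA is just one common choice): an unsupervised model groups words that tend to co-occur, surfacing the main concerns discussed.

```python
# Minimal sketch of topic modeling on short social media posts.
# The posts below are invented, not data from the cited study.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

posts = [
    "schools closed again and kids are stuck at home all week",
    "where can I get masks, every pharmacy near me is sold out",
    "home schooling is exhausting, teachers deserve a raise",
    "hospital staff say they are running low on masks and gloves",
    "any news on when the lockdown rules will be relaxed",
    "queues outside the pharmacy this morning, shelves half empty",
]

vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(posts)

# Ask for two topics; real studies tune this number and have humans
# read and validate the resulting topics.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

terms = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top = [terms[j] for j in topic.argsort()[-5:][::-1]]
    print(f"Topic {i}: {', '.join(top)}")
```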

In the context of disinformation, how do you see the balance between AI-driven content moderation and potential concerns related to censorship or infringement on freedom of speech?

I think it’s more about the digital platforms’ response, in terms of how fast and efficient moderation can be on their side. Each country or region can have different concerns about censorship and different laws to tackle the issue, but in the end the social platforms must enforce regulations. AI can be the most unbiased and clean you can imagine, but in the end it is real humans, with their own biases and censorship reflexes, who take part in the moderation process.

The best balance is to be careful about the AI’s potential biases and harms and to be very clear about human moderation policies. Those policies can vary depending on cultural or political contexts, but they must be clear, written down, and approved by all stakeholders at all times.

What challenges exist in using AI to combat disinformation, and how can these challenges be addressed to ensure accurate and unbiased outcomes?

For a few years now, AI has been all about Large Language Models, and lately Large “Multi-Modal” Models. In other words: stochastic parrots (to paraphrase Emily M. Bender et al., 2021) guessing the next word, and the next, and so on, until a complete sentence is reached.
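
To make the “stochastic parrot” image concrete, here is a toy illustration of next-word generation (the probability table is invented; a real LLM learns these probabilities from an enormous pre-training corpus, which is where the biases discussed below come from):

```python
# Toy illustration of autoregressive generation: sample the next word from a
# probability table, append it, and repeat. The table is invented for
# demonstration; real models learn it from their pre-training data.
import random

next_word_probs = {
    "the": {"news": 0.5, "report": 0.3, "truth": 0.2},
    "news": {"is": 0.7, "spread": 0.3},
    "report": {"is": 0.6, "claims": 0.4},
    "truth": {"is": 1.0},
    "is": {"fake": 0.4, "out": 0.6},
    "claims": {"the": 1.0},
    "spread": {"fast": 1.0},
}

def generate(start, max_words=8):
    words = [start]
    while words[-1] in next_word_probs and len(words) < max_words:
        options = next_word_probs[words[-1]]
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

print(generate("the"))
```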

Using this kind of large model is, by design, biased. It will analyze and/or generate new text or visuals based on a training dataset we know nothing about, and it will reproduce the biases of that pre-training dataset in the way it processes new information. You would want to create great unbiased AI algorithms to fight all kinds of disinformation at once, but you simply can’t. To ensure accurate and unbiased outcomes, you need to master the training dataset, make sure it represents the type of disinformation campaign you want to monitor and, perhaps, stick to one kind of threat.

It’s important to narrow our expectations down to real use cases and tackle them in detail, with lots of real human experts working behind the scenes to ensure the quality of the data fed to the AI and to evaluate the outcomes. Human oversight is, in my opinion, the key to good uses of AI, in the disinformation field and in all sectors in general.
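
What “evaluating the outcomes” can look like in practice is sketched below (the labels are invented for illustration): human reviewers label a held-out sample, and the model’s decisions are scored against those labels, with precision and recall read side by side.

```python
# Minimal sketch of human-in-the-loop evaluation: compare the system's
# decisions against labels assigned by human reviewers on a held-out sample.
# The labels below are invented for illustration.
from sklearn.metrics import classification_report

human_labels = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]  # 1 = disinformation
model_flags  = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]  # what the system flagged

# Precision shows how often a flag is justified (false flags raise the
# censorship concern above); recall shows how much harmful content slips through.
print(classification_report(human_labels, model_flags,
                            target_names=["legitimate", "disinformation"]))
```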

How might AI contribute to the rapid verification of information during breaking news situations to prevent the spread of false or misleading narratives?

AI might be very useful when trained on specific narratives and digital disinformation networks to detect weak signals before it’s too late. In our fast-spreading information world, prevention is key. Even with regulation and laws, once content is out there and going viral, it’s already too late to counter it effectively. I’m not sure we’ll be able to prevent the spread thanks to AI, but technology could help save precious time in the process.

In terms of user education, how can AI be employed to enhance media literacy and help individuals critically assess the information they encounter online?

I believe in digital and AI literacy and, above all, critical thinking and common sense to assess information and disinformation. We need to help individuals develop those skills by providing a new kind of education and best practices, so that everyone is more engaged in their day-to-day (online and offline) media consumption.

Identifying disinformation and fighting it starts at the individual level, with keys to figure out whether a piece of news is legitimate or not. Disinformation often spreads through the perception that “it could be real and I wanted to believe it, so I shared it”. That is not an AI problem but an educational one.

What ethical considerations should be taken into account when developing and deploying AI solutions for disinformation detection and prevention?

As I said before, human oversight is unavoidable at each step of the development of this kind of tool. The AI Act is quite clear about that, and that’s a good sign: human oversight should be mandatory and must become a standard best practice when developing and deploying disinformation detection and prevention tools.

In Propaganda (1928), Edward Bernays wrote: “Now ‘public opinion’ stood out as a force that must be managed, and not through clever guesswork but by experts trained to do that all-important job.” Do we want AI to be those new experts trained to manage public opinion?

How can collaboration between technology companies, governments, and research institutions contribute to a more comprehensive and effective approach to combating disinformation through AI?

Collaboration between all the private and public organizations involved in combating disinformation, through AI and more old-school means, is vital. Disinformation and conspiracy-theory crooks are always one step ahead in terms of how to make people click on ads; the platforms must act, and governments should do more to enforce regulations on moderation and online disinformation.

As disinformation tactics evolve, how can AI technologies adapt and stay ahead to maintain their effectiveness in protecting online information ecosystems?

I don’t know if AI technologies are really ahead of disinformation tactics. By design, today’s large models are trained on past disinformation campaigns and won’t be a big help in discovering totally new ways to use social media platforms to spread disinformation.

However, using AI to turbocharge institutions, academics and private companies in the field, so that they become more efficient at finding harmful networks, narratives and operations, is of course very important. But, once again, investing in AI is only part of the solution; we need to do more on digital and AI literacy to help individuals detect and fight online misinformation.

Journalist: Sonja Kramarska