
Yanis Kompatsiaris: Artificial intelligence can assist media professionals

Interview

07.03.24


There is growing concern among media professionals that the increased automation of media workflows enabled by artificial intelligence could lead to the loss of human jobs or negatively affect creativity, says our interviewee Yanis Kompatsiaris, Director of the Institute of Information Technologies, Director of Research at CERTH-ITI and Head of the Laboratory for Multimedia Knowledge and Social Media Analysis. At the International Symposium organized by ICS and partners, he spoke on a topic that is making waves in media around the world: "Artificial Intelligence in the Media Outlets: The Good, the Bad and the Ugly." The title of his speech was not chosen by accident, because in this interview he also warns that, although the benefits of artificial intelligence are many and important for the media industry, they do not come without ethical challenges and risks.

How do you see artificial intelligence transforming the landscape of media and entertainment in the next five years?

AI technologies are expected to disrupt the media and entertainment industry and transform existing workflows by automating tedious processes, developing assistants to support media professionals, improving audience analysis and profiling, and developing advanced forecasting and decision support tools.

Generative AI in particular can greatly affect the media sector by revolutionizing content creation, enhancing visual effects, enabling personalized experiences, and fostering new forms of artistic expression. However, there are also challenges related to misinformation, privacy, and ethical considerations.

AI technology could help shape the media experience for users by enabling new ways of being informed, being entertained, being creative, interacting with content, communicating with other people all over the world, etc.

Can you provide examples of successful AI implementations in media production or content creation that you find particularly noteworthy?

The media & entertainment industry (news, film/TV, music, games, social media, advertising, publishing, etc.) is already benefitting from AI advancements that can significantly facilitate, enhance or transform important tasks across the media industry value chain, including but not limited to: automation of existing tedious workflows such as the exploration of large audiovisual archives and content search and retrieval; automatic content creation harnessing the power of generative AI; automatic content enhancement, e.g. to exploit the wealth of existing audiovisual content created in the pre-digital era; personalisation of content and services via enhanced user profiling and improved content recommendations to improve the user experience; accurate audience analysis for enhanced audience targeting, content/service development and increased advertising revenue, at both the global and the local level; improved accessibility to content thanks, for example, to automatic language translation; accurate forecasting about different business aspects; and more efficient decision-making in general.

In what ways can AI be leveraged to enhance personalized content recommendations for users in media streaming services?

More and more media companies are investing large amounts of money to personalise their content and services and thus satisfy each customer’s unique preferences, experiences, needs and moods.

Personalisation encompasses content suggestion, content presentation, interaction with content or personalisation of content itself (e.g. personalised movie trailers). It also means providing content to users where they are and when they want it.

Elaborate profiling based on the continuous collection and analysis of user preferences, behaviours and actions is already widely used in many media sectors (e.g. the gaming industry, social media, advertising, streaming services); however, the trend is moving towards more sophisticated approaches that also consider what is happening to the user, or in the world, at that moment.

While personalisation offers many benefits, there is always the danger of over-personalisation, which reinforces filter bubbles, limiting exposure to diverse perspectives and fostering echo chambers. This inhibits critical thinking, exacerbates societal polarisation and can lead to algorithmic biases that reinforce stereotypes and misinformation.
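As a rough, illustrative sketch of how a recommender might balance relevance against over-personalisation, the Python snippet below scores items against a user profile and then applies a simple diversity-aware re-ranking step that penalises items too similar to what has already been selected. The catalogue, feature vectors and diversity weight are hypothetical placeholders, not a description of any particular production system.

```python
from math import sqrt

# Toy item catalogue: id -> topic feature vector (hypothetical values).
CATALOGUE = {
    "politics_doc": [0.9, 0.1, 0.0],
    "politics_talk": [0.8, 0.2, 0.0],
    "sports_show": [0.1, 0.9, 0.1],
    "culture_series": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(user_profile, catalogue, k=3, diversity_weight=0.4):
    """Greedy re-ranking: relevance to the user profile minus a penalty
    for being too similar to items already selected, so some diverse
    content survives in the final list."""
    selected = []
    remaining = dict(catalogue)
    while remaining and len(selected) < k:
        def score(item_id):
            vec = remaining[item_id]
            relevance = cosine(user_profile, vec)
            redundancy = max(
                (cosine(vec, catalogue[s]) for s in selected), default=0.0
            )
            return relevance - diversity_weight * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        del remaining[best]
    return selected

if __name__ == "__main__":
    # A user whose history is dominated by politics content.
    profile = [0.85, 0.1, 0.05]
    print(recommend(profile, CATALOGUE))
```

Tuning such a diversity weight is exactly the kind of design choice that determines how strongly a service pushes back against filter bubbles.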

How can AI technologies contribute to the automation of routine tasks in media production, and what impact might this have on the industry as a whole?

Media workflows often include tedious or repetitive tasks that require a lot of resources. Some examples include searching large audio-visual archives or the Internet to locate information that can help a fact-checker verify the validity of a statement, analysing large volumes of documents for investigative journalism, producing subtitles or voice dubbing in different languages, producing content summaries, moderating content, organising A/B tests for different product parameters, clarifying complex IPR, etc. AI can help media professionals do their job more efficiently, either by completely automating some tasks (e.g. content labelling or multilingual translation) or by supporting professionals in more creative tasks (e.g. by offering automated suggestions, editing or enhancing content, answering questions, offering predictions about user engagement with content, etc.).
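As a toy illustration of the first category (fully automated tasks such as content labelling), the sketch below assigns topic labels with a simple keyword lookup and routes uncertain items to a human editor. The topic lexicon and threshold are assumptions made up for the example; real systems would use trained classifiers, but the overall workflow shape is similar.

```python
# Minimal content-labelling sketch: keyword counts per topic, with a
# threshold that routes unlabelled items to a human editor for review.
TOPIC_KEYWORDS = {
    "politics": {"election", "parliament", "minister", "policy"},
    "sports":   {"match", "goal", "tournament", "league"},
    "culture":  {"festival", "exhibition", "concert", "film"},
}

def label_text(text, min_hits=2):
    """Return (labels, needs_review): topics whose keywords appear at
    least `min_hits` times; flag for human review if nothing matches."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    labels = []
    for topic, keywords in TOPIC_KEYWORDS.items():
        hits = sum(1 for t in tokens if t in keywords)
        if hits >= min_hits:
            labels.append(topic)
    return labels, not labels

if __name__ == "__main__":
    article = "The minister announced a new policy before the election."
    print(label_text(article))  # (['politics'], False)
```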

The use of AI can cut down operational costs and ultimately free up resources that can be directed to support work of better quality and increased creativity. At the same time, there is a growing concern among media professionals about how the increased automation of media workflows enabled by AI may lead to loss of human jobs or negatively affect creativity.

What ethical considerations should be taken into account when implementing AI in media, especially in content curation and recommendation systems?

While the benefits of AI for the media industry are many and important, they do not come without ethical challenges and risks. An important one is the risk posed to user privacy by the large-scale user monitoring and profiling mechanisms used by the media industry to offer increased personalisation and achieve better user targeting. Equally disturbing are the phenomena of AI bias and discrimination against specific groups of people, including racial bias, gender bias, etc. For example, recommendation engines may discriminate against women when trained on film reviews that are mainly contributed by men, while NLP models may introduce bias against underrepresented groups.
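One practical, if partial, way to catch such bias early is to audit how training data and outputs are distributed across demographic groups before deployment. The toy Python sketch below only measures group representation in a hypothetical film-review dataset; a real fairness audit would go much further, but an imbalance like this is already a warning sign.

```python
from collections import defaultdict

# Toy review log: (reviewer_group, film, rating). In the scenario from the
# interview, one group contributes far more reviews than the other.
REVIEWS = [
    ("men", "film_a", 4.5), ("men", "film_a", 4.0), ("men", "film_b", 3.5),
    ("men", "film_b", 4.0), ("men", "film_c", 2.0), ("women", "film_c", 4.5),
]

def group_representation(reviews):
    """Share of training examples contributed by each group; a large
    imbalance is an early warning that a model trained on this data may
    underweight the preferences of the underrepresented group."""
    counts = defaultdict(int)
    for group, _, _ in reviews:
        counts[group] += 1
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

if __name__ == "__main__":
    print(group_representation(REVIEWS))  # e.g. {'men': 0.83, 'women': 0.17}
```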

Another significant concern is the lack of AI explainability: AI systems are currently black boxes that cannot explain how they reached a decision, e.g. why they recommended specific content or predicted a particular outcome. More transparency about how AI tools work is required for media professionals to trust them.

There is also growing concern about the manipulation of content and misinformation, making media organisations fear the negative impact of the growing amount of misinformation on the public’s trust in the media, but also on freedom of expression.

And finally, as noted earlier, there is growing concern among media professionals that the increased automation of media workflows enabled by AI may lead to the loss of human jobs or negatively affect creativity.

With the increasing prevalence of deepfake technology, how can AI be used to detect and mitigate the spread of misinformation in the media?

To detect and mitigate the spread of online dis- and misinformation, AI-powered support systems should be developed that enable: multimodal and cross-platform analysis; linguistic, country, culture, context and reputation analysis; full analysis of synthetic content and synthetic manipulation across text, image, video and audio; automatic and early (real-time) detection of disinformation; automatic detection of check-worthy items, claims or narratives; interoperability with existing content authentication systems; and seamless, flexible human-AI collaboration workflows.
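To make one of these capabilities concrete, the sketch below is a deliberately crude heuristic for detecting check-worthy claims: it ranks sentences for a fact-checker’s queue using surface cues such as numbers, superlatives and attribution verbs. The cue lists, scoring and threshold are illustrative assumptions only; operational systems rely on trained claim-detection models.

```python
import re

# Crude surface cues that often accompany factual claims.
NUMBER_PATTERN = re.compile(r"\b\d+(\.\d+)?%?\b")
STRONG_CUES = {"always", "never", "first", "largest", "only", "doubled"}
ATTRIBUTION_CUES = {"said", "claimed", "according", "announced"}

def check_worthiness(sentence):
    """Score a sentence by how many factual-claim cues it contains."""
    tokens = {t.strip(".,!?\"'").lower() for t in sentence.split()}
    score = 2 if NUMBER_PATTERN.search(sentence) else 0
    score += sum(1 for t in tokens if t in STRONG_CUES)
    score += sum(1 for t in tokens if t in ATTRIBUTION_CUES)
    return score

def triage(sentences, threshold=2):
    """Return (score, sentence) pairs above the threshold, highest first,
    so human fact-checkers can verify the most claim-like items first."""
    scored = [(check_worthiness(s), s) for s in sentences]
    return sorted((x for x in scored if x[0] >= threshold), reverse=True)

if __name__ == "__main__":
    print(triage([
        "Unemployment fell by 3.2% last year, the minister claimed.",
        "The weather was pleasant during the event.",
    ]))
```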

How do you envision AI shaping the future of storytelling and narrative creation in various media formats, such as movies, TV shows, and interactive experiences?

Here are some examples of how AI can change the future of storytelling and narrative creation. Using AI to extend certain parts and subplots of a movie with material that is more interesting for certain audiences, or to create personalised experiences based on the reactions of the audience (as measured, for example, by wearable sensors or cameras), can lead to innovative ways of experiencing cinema and TV.

Another example is the delivery of interactive fiction content based on conversational agents, e.g. interactive stories for children or adults. The conversational and personalisation capabilities of chatbots create a closer user experience and foster engagement. This type of AI technology can provide new storytelling experiences and increase user engagement in sectors like advertising, marketing, film and audio.

AI has the potential to advance storytelling with virtual characters that are capable of advanced interactions with human beings. AI can augment virtual character design (facial expressions, body movements, voice), can identify human emotions and can enhance interaction between humans and virtual characters by making it more natural.

Generative AI is already used to automatically create new content by building on existing content. Such technologies will be increasingly used by the media industry to create new high-quality text, images, video and audio. The range of potential applications is virtually limitless: deepfakes for the film industry, music composition, game asset design, script generation, etc. AI has already been used in the creative process behind movie trailers and advertisements. Deepfakes are a hot topic in the film industry; the current, initial use of this technology aims to replace actors, e.g. to depict a younger appearance of an aged actor in a film sequel. Extension to full scenes and settings can be expected in the next few years.

AI can also be used to automate film/TV directing, editing and shooting processes. Through automatic selection of the best filmed scenes, virtual cinematography allows new scenes to be created from the filmed ones, while automated camera movement and tracking improve the shooting process.

AI may also help develop prosumer intelligence for the publishing sector by providing algorithms and techniques that unlock the untapped potential in the wealth of data generated by users, for example in fanfiction communities, and leverage it to improve co-creation processes.

What challenges do you foresee in implementing AI-driven solutions in media, and how would you address those challenges?

Some of the challenges already identified include the integration of AI into existing business operations and processes, and the cost of developing AI solutions. These can be addressed by fostering collaboration between the media industry and AI providers and the co-development of relevant technologies, but also by the availability of ready-to-use or open-source tools in public AI repositories/marketplaces.

Lack of AI skills among media professionals: this can be addressed through training that helps personnel acquire AI skills and through the recruitment of additional personnel who already have them.

Ensuring compliance with relevant EU regulatory frameworks: an ethical framework for the use of AI should be established in each company, while more guidance and clearer information about the practical implementation of AI ethics is needed from the EC and relevant institutions.

Can you discuss the role of AI in improving audience engagement and interaction, particularly in live events or broadcasts?

AI can be used to automate engagement and two-way communication with users through NLP and sentiment analysis, but also to improve content and services personalisation via AI-powered data management platforms. NLP technology can also be used for automatic real-time translation of content and an enhanced communication experience, free of language barriers. Moreover, AI can support automatic analysis of video or textual content and thus facilitate tasks such as real-time monitoring of audiences and trend detection in social media. Chatbots can be used to converse with users and audiences or to provide automated, personalised commentary for live programmes such as sports events; their conversational and personalisation capabilities create a closer user experience, enhance interaction with the user and foster engagement.
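As a small illustration of real-time audience monitoring during a live broadcast, the following sketch scores chat messages with a tiny sentiment lexicon and aggregates the mood per minute of the programme. The lexicon, message format and scoring are assumptions for the example; deployed systems would use trained, multilingual sentiment models.

```python
from collections import defaultdict

# Tiny sentiment lexicon (illustrative only).
POSITIVE = {"great", "love", "amazing", "goal", "brilliant"}
NEGATIVE = {"boring", "bad", "terrible", "miss", "awful"}

def message_score(text):
    """+1 per positive word, -1 per negative word."""
    tokens = [t.strip(".,!?").lower() for t in text.split()]
    return sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)

def mood_per_minute(messages):
    """messages: iterable of (minute, text) pairs from a live chat.
    Returns the average sentiment per minute of the programme."""
    totals, counts = defaultdict(float), defaultdict(int)
    for minute, text in messages:
        totals[minute] += message_score(text)
        counts[minute] += 1
    return {m: totals[m] / counts[m] for m in sorted(totals)}

if __name__ == "__main__":
    chat = [
        (12, "What a brilliant goal, amazing!"),
        (12, "I love this commentator"),
        (13, "This half is boring"),
    ]
    print(mood_per_minute(chat))  # {12: 2.0, 13: -1.0}
```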

As AI becomes more integrated into media workflows, how can organizations ensure data privacy and security, especially when dealing with sensitive information or user data?

Many of the new concerns regarding AI relate to users’ rights to privacy and to transparency about ‘who’ they are interacting with, but also to the new questions of accountability and liability that the introduction of AI raises. The following practices can be highlighted as mitigating measures against these concerns about privacy, transparency, accountability and liability regarding AI in the media sector:

Best practices for responsible data handling in the media sector. As the extensive use of data continues to grow, it will be vital that new best practices are developed to support responsible data strategies that protect the rights of the individual.

Best practices and policies regarding disclosure of AI systems for the media sector. As the question of who produced or curated an article is no longer limited to, for example, journalists, editors, and producers, it will be vital that new guidelines for how to disclose the utilisation of AI in these processes are developed to protect the individual’s right to transparency.

Explainable AI solutions that can help users understand how an AI system works and makes its decisions. As users are increasingly served, at least in part, by AI systems in their media experience, it is important that they have access to understandable explanations of what the system does and on the basis of what data, so that they can uphold their right, for example, to object to the way a decision was made (a small illustrative sketch of such an explanation follows at the end of this answer).

To summarise, new best practices should be developed to support responsible data strategies within the media sector that protect the rights of the individual, and best practices and policies regarding the disclosure of AI systems for the media sector are also necessary.
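To make the explainability point concrete, here is a minimal sketch of a user-facing explanation for a content-based recommendation: it simply names the previously watched items most similar to the recommended one. The catalogue, feature vectors and wording are hypothetical; real explainable-AI tooling is considerably richer.

```python
# Minimal "why was this recommended?" sketch for a content-based system:
# report the watched items most similar to the recommended one.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def explain(recommended_vec, watch_history, top_n=2):
    """watch_history: dict of title -> feature vector. Returns a short,
    human-readable explanation naming the most similar watched titles."""
    ranked = sorted(
        watch_history,
        key=lambda t: cosine(recommended_vec, watch_history[t]),
        reverse=True,
    )
    return "Recommended because you watched: " + ", ".join(ranked[:top_n])

if __name__ == "__main__":
    history = {
        "Nature documentary": [0.9, 0.1],
        "Cooking show": [0.1, 0.9],
        "Wildlife series": [0.8, 0.2],
    }
    print(explain([0.85, 0.15], history))
```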

 
Journalist: Sonja Kramarska
Photographer: Darko Andonovski