November 28, 2021

Could AI really revolutionise the way we produce and manage content?

Never has there been such demand for compelling video content, available instantly across multiple devices and platforms. With so much competition for viewing time in almost every media sector, producers are creating increasingly elaborate content in a range of separate formats and resolutions.

In response, AI technology looks set to completely shake up the way we produce content for entertainment, pointing to a revolution in cinema, TV, VR and other video content production.

By integrating AI frameworks with media management solutions, it is possible to extract additional value from content libraries, even massive archives. AI can automate the tagging of content with metadata, making content libraries highly searchable with minimal manual effort. Naturally, the ability to find relevant content quickly makes it much easier to monetise content, both legacy and newly ingested.
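The tag-at-ingest-then-search idea can be sketched in a few lines. This is a toy illustration, not iconik's actual API: `detect_labels` is a hypothetical stand-in for a call to a real AI framework, and the returned labels are made up.

```python
# A minimal sketch of AI-assisted tagging and search.

def detect_labels(asset_path):
    # Placeholder: a real AI framework would analyse the media itself.
    fake_results = {
        "match_2017.mp4": ["soccer", "goal", "crowd"],
        "interview.mp4": ["studio", "person", "speech"],
    }
    return fake_results.get(asset_path, [])

class Asset:
    def __init__(self, path):
        self.path = path
        # Tagged automatically at ingest, with no manual effort.
        self.tags = set(detect_labels(path))

def search(assets, term):
    """Return every asset whose auto-generated tags contain the term."""
    return [a for a in assets if term in a.tags]

library = [Asset("match_2017.mp4"), Asset("interview.mp4")]
print([a.path for a in search(library, "goal")])  # ['match_2017.mp4']
```

Once every asset carries machine-generated tags, finding a clip becomes a simple lookup rather than hours of scrubbing through footage.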

There are a number of examples where AI technology could prove significant during the production process:


Sports

AI technology has the potential to streamline the production of sports assets for distribution on various platforms. One example is the generation of sub-clips for use on social media. With an AI-based framework analysing every moment of ingested game footage, memorable moments could be identified and called up by a solution ready for distribution. If, for example, a user wanted to post a short clip of Zlatan Ibrahimovic’s two goals in the 2017 EFL Cup final against Southampton, it would take only minutes to find the exact clips and push them to social platforms.



Marketing

Marketers are creating video at an astounding rate. It’s an integral component of most marketing strategies, yet 43% of marketers say they would create even more video content were it not for obstacles like time, resources and budget. AI technology could be the solution.

Consider a user who wants to create two versions of an advert, perhaps for separate regions with cultural differences, or for broadcast before and after the 9pm watershed. It’s much easier to find the right content and assemble it into an asset if the user can set a parameter that disqualifies from a search any item containing content some viewers might find offensive.
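That exclusion parameter could look something like the sketch below. It assumes assets already carry AI-generated content flags; the clip names, tags and `find_clips` helper are all hypothetical.

```python
# Sketch of a search with an exclusion parameter, e.g. for
# pre-watershed filtering of flagged content.

def find_clips(assets, wanted, exclude=()):
    """Match clips on a wanted tag while disqualifying any clip
    that carries a tag from the exclude list."""
    excluded = set(exclude)
    return [a for a in assets
            if wanted in a["tags"] and not excluded & set(a["tags"])]

clips = [
    {"name": "ad_take1.mp4", "tags": {"product", "violence"}},
    {"name": "ad_take2.mp4", "tags": {"product", "family-friendly"}},
]

# Pre-watershed version of the advert: drop anything flagged as violent.
print([c["name"] for c in find_clips(clips, "product", exclude=["violence"])])
# ['ad_take2.mp4']
```

The same search run without the exclusion list would return both takes, giving the post-watershed edit for free.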

In our own cloud-based media management platform, iconik, users can plug in any AI framework they choose to best fit their products or business. For example, they could implement a custom-trained AI capable of recognising exact content or specific products, perhaps ‘Nike Air Zoom Pegasus 34iD’ rather than simply ‘white sneaker’.


Broadcasting and OTT

Automated distribution using AI can increase the value of content and remonetise libraries, too. In the broadcast space, if a certain actor or comedian suddenly becomes popular in a region, a broadcaster’s media management solution could use AI to identify content featuring that actor and add it to a television channel’s distribution line-up. In a broadcast news setting, AI frameworks make it possible to immediately identify and call up clips relating to a specific topic within breaking news. In the case of a natural disaster in India, for example, a broadcaster could use AI to search a content library for legacy footage of what an area looked like before the disaster, or of the last time something similar occurred there. This has been possible in the past, but only if every piece of footage had been tagged manually, requiring hours of labour.

Broadcast archives are traditionally massive in scale, and the more content that exists, the more difficult it is to manually tag content. Automated workflows, such as AI tagging, are therefore more important than ever.

The future of AI


At the moment, AI technology cannot operate entirely without human intervention; it simply isn’t accurate enough. Within iconik, we’ve made it possible for users to set rules so that if the AI framework returns a confidence rating below a certain level, the tag is discarded, while anything above a higher level is automatically approved. Anything between these two levels is sent for human approval. And as the technology learns from its mistakes, it will only become more accurate over time.
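The two-threshold rule described above can be expressed very simply. This is an illustrative sketch, not iconik's implementation, and the threshold values are made up for the example.

```python
# Two-threshold triage for AI-generated tags: auto-approve confident
# tags, discard weak ones, and queue everything in between for a human.

APPROVE_ABOVE = 0.90   # illustrative threshold, not a real default
DISCARD_BELOW = 0.50   # illustrative threshold, not a real default

def triage(tag, confidence):
    """Route a tag based on the AI framework's confidence rating."""
    if confidence >= APPROVE_ABOVE:
        return "approved"
    if confidence < DISCARD_BELOW:
        return "discarded"
    return "human_review"

print(triage("white sneaker", 0.95))  # approved
print(triage("white sneaker", 0.30))  # discarded
print(triage("white sneaker", 0.70))  # human_review
```

The human-review decisions can then be fed back to the framework as training data, which is exactly how the accuracy improves over time.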

AI is already beginning to move beyond content curation, and automated content production is not far over the horizon. The next steps are automated scripting and storyboarding, as well as discovery of the most suitable clips based on the purpose of an asset. It could become possible, for example, for an AI framework to identify the 10 best scenes in a specific movie for use in a new trailer, based on all previous trailers within a certain genre.

Although the creation of content is not in itself automated yet, AI-based content discovery is the first step in making the content creation process as automated as possible.

Sentiment analysis is also a big opportunity: AI technology could, for example, allow a content creator to search for all assets in which President Trump appears happy or sad. It would take a news broadcaster a substantial amount of time to manually analyse and tag every piece of footage containing the president with a description of his emotion, but AI technology shortens the process considerably. This is already looking viable, as AI-based media management solutions can already determine whether a scene or piece of footage is sad, happy or funny.

On a larger scale, AI frameworks could also present different types of video assets depending on the mood of an entire user base, i.e. by determining whether the broader audience is happy or sad. Although this is a new concept, studies have shown that AI systems can set a standard ‘mood’ index for a certain region based on tweets, forums and other social media posts. This gives broadcasters, marketers and anyone else looking to engage with the public a better understanding of their target audience and the best way and time to communicate with them.
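A regional mood index of the kind those studies describe reduces, at its simplest, to averaging per-post sentiment scores. The sketch below is a toy version: the scores (on a -1 sad to +1 happy scale) and the positive/negative cut-off are invented for illustration.

```python
# Toy mood index: average the sentiment score of each social post
# from a region to estimate the audience's overall mood.

def mood_index(sentiment_scores):
    """Mean sentiment across posts; positive means a broadly happy audience."""
    return sum(sentiment_scores) / len(sentiment_scores)

region_posts = [0.8, 0.4, -0.2, 0.6]  # one made-up score per social post
index = mood_index(region_posts)
print("upbeat" if index > 0 else "downbeat", round(index, 2))  # upbeat 0.4
```

A real system would weight by recency and post reach, but even this crude average shows how scattered social signals collapse into a single number a scheduler could act on.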

The Internet of Things (IoT) presents another opportunity: serving content suggestions based on a viewer’s mood. If a user has been playing sad songs on one connected device, another could suggest a tear-jerker of a movie. We’re not quite at this stage yet, but the technology is in continuous development, so almost anything is possible.

Our CEO, Parham Azimi, recently offered his thoughts on the subject of AI for content production and management in this article by Adrian Pennington in The Broadcast Bridge. Read it in full here.


Get started with Cantemo today!

Turn your content into something incredible.