Future Trends Theatre in the Future Zone
The IABM Future Trends Theatre in the Future Zone hosted a packed programme of presentations exploring up-and-coming technology and business trends and how they will evolve from today's environment.
The IABM Future Trends Theatre is designed to give attendees an understanding of how new technologies can enable business plans now, rather than dismissing them as far-out ideas or hype still waiting for practical use cases.
More and more TV shows and movies are translated into dubbed and subtitled versions, requiring translators and voice artists to access highly sensitive pre-release content. To protect the video during dubbing, rotoscoping is typically used, adding time and cost that is prohibitive for tightly budgeted productions. This presentation discusses use of off-the-shelf and custom AI to protect the video using various machine-vision techniques to automatically identify when someone is speaking on-screen and to rotoscope them to minimize exposure of the full scene, effectively rendering piracy pointless.
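As a rough illustration of the masking idea, assuming a machine-vision detector has already returned a bounding box for the on-screen speaker (the detector itself, and all names below, are hypothetical):

```python
import numpy as np

def mask_outside_speaker(frame: np.ndarray, speaker_box: tuple) -> np.ndarray:
    """Blank everything outside the detected speaker's bounding box,
    so translators and voice artists see only the region needed for
    lip-sync work rather than the full pre-release scene."""
    y0, y1, x0, x1 = speaker_box
    masked = np.zeros_like(frame)
    masked[y0:y1, x0:x1] = frame[y0:y1, x0:x1]
    return masked

# Hypothetical usage: speaker_box would come from a model that
# detects mouth movement; here it is hard-coded for illustration.
frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)
protected = mask_outside_speaker(frame, (200, 600, 800, 1200))
```

In a real pipeline the box would be tracked per-frame, replacing the manual rotoscoping the presentation describes as too costly for tightly budgeted productions.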
In the media industry, machine learning and deep learning-based solutions can be exploited for automatic verification of compliance requirements on content, such as detection of nudity, adult content, violence, prohibited objects, substance abuse and strong language. Integrating these systems with traditional QC systems is important, as well as building well-annotated quality datasets for the specific requirements of content compliance. This presentation discusses how deploying systems that meet these requirements can improve content compliance processes, providing significant operational cost savings and reliable and repeatable reporting of compliance issues.
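A minimal sketch of the QC-integration step described above, assuming the ML models emit per-frame category scores (the data structures and threshold here are illustrative, not a specific product's API):

```python
from dataclasses import dataclass

@dataclass
class ComplianceFlag:
    timecode: float     # seconds into the programme
    category: str       # e.g. "nudity", "violence", "strong_language"
    confidence: float   # model score in [0, 1]

def build_qc_report(flags, threshold=0.8):
    """Fold raw per-frame model outputs into a repeatable QC report:
    only flags at or above the confidence threshold are reported,
    grouped by compliance category."""
    report = {}
    for f in flags:
        if f.confidence >= threshold:
            report.setdefault(f.category, []).append(f.timecode)
    return report

flags = [ComplianceFlag(12.4, "violence", 0.91),
         ComplianceFlag(12.5, "violence", 0.42),
         ComplianceFlag(88.0, "strong_language", 0.95)]
report = build_qc_report(flags)
```

Feeding a structured report like this into an existing QC system is what makes the compliance output "reliable and repeatable" rather than a loose stream of model scores.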
Deep Learning is increasingly used in production, from filtering illegal content to driving cars and drones. This session shows how those systems can easily be tricked and attacked, and discusses ways to counter malicious use. Humans can be tricked too: generated content is becoming increasingly indistinguishable from reality. Also covered are the use of Deep Learning to detect fake content and current advances in Deep Learning-based content generation, for both fake and original content creation.
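One well-known way such systems are tricked is the Fast Gradient Sign Method; here is a minimal numpy sketch on a toy linear score (the session itself may cover different or more sophisticated attacks):

```python
import numpy as np

def fgsm_perturb(x, gradient, epsilon=0.05):
    """Fast Gradient Sign Method: shift each input feature a small
    step in the sign of the gradient. For many classifiers this is
    enough to flip the decision while the input looks unchanged."""
    return np.clip(x + epsilon * np.sign(gradient), 0.0, 1.0)

# Toy linear score s = w . x; its gradient with respect to x is w.
w = np.array([0.9, -0.4, 0.2])
x = np.array([0.5, 0.5, 0.5])
x_adv = fgsm_perturb(x, w)
```

The perturbation is bounded by epsilon per feature, which is why adversarial inputs can remain visually indistinguishable from the originals.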
With the explosion of award-winning content from streaming services, viewers are overwhelmed by the choice of what to watch next. Only 10% watch most or all of the content recommended by their streaming service, and 44% say recommendations are rarely or never what they want to watch. Content recommendations aren't meeting consumer needs. This session explores what's next for content recommendations: harnessing the power of emerging technologies like AI to deliver highly personalized suggestions that satisfy viewers and drive lift for advertisers.
A new breed of AI-based platforms for media analysis, when paired with leading-edge MAM tools, offers great potential for transforming media workflows, such as unlocking the potential of speech to text and automatic language translation, and tagging content automatically based on attributes such as people, places, things, and even sentiment. This session presents a new vision for MAM driven by AI-based speech recognition, video analysis and image analysis capabilities, making it easier than ever for operations to access, manage, and archive tremendous volumes of content.
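A toy sketch of the pattern described above: AI-generated speech-to-text output and image-analysis labels merged into one searchable MAM index (all names are illustrative; a real deployment would call actual speech and vision services):

```python
def index_asset(catalog, asset_id, transcript_words, detected_labels):
    """Merge speech-to-text output and image-analysis labels into a
    single searchable tag set per asset, so operators can find
    content by what is said or what appears on screen."""
    catalog[asset_id] = {w.lower() for w in transcript_words} | set(detected_labels)

def search(catalog, term):
    """Return the IDs of all assets whose tag set contains the term."""
    return [aid for aid, tags in catalog.items() if term.lower() in tags]

catalog = {}
index_asset(catalog, "clip-001", ["Welcome", "to", "Boston"], {"skyline", "harbor"})
index_asset(catalog, "clip-002", ["Weather", "update"], {"map"})
hits = search(catalog, "boston")
```

Automatic tagging at ingest is what lets operations "access, manage, and archive tremendous volumes of content" without manual logging.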
AI is an area that's ripe for innovation. This session will discuss new AI applications that are showing tremendous potential and are now being explored. It will examine the dilemma of how best to harness the power of AI in a way that delivers tangible business benefits as the technology progresses and implementation grows, and how to keep the user experience elegant in light of the growing dimensions of data.
This session will first give an overview of the challenges that broadcasters face, the benefits of transforming to All-IP and a high-level overview of what exactly is meant by All-IP. It then will dive deeper into the technologies themselves, starting with a discussion of the Joint Task Force on Network Media (JT-NM) roadmap and the data essence and control planes of interoperability, followed by an overview of the major standards behind All-IP including the new SMPTE ST 2110 suite of standards for Professional Media over IP. The current state of the industry will be reviewed and what is to be done next.
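As a point of reference for the standards discussion, the ST 2110 suite carries each essence type as its own stream rather than as embedded SDI; a minimal sketch of how a receiver might track them (the port numbers are illustrative):

```python
# ST 2110 separates essences into independent RTP streams:
#   ST 2110-20: uncompressed active video
#   ST 2110-30: PCM digital audio
#   ST 2110-40: ancillary data (captions, timecode, etc.)
streams = {
    "video": {"standard": "ST 2110-20", "port": 5004},
    "audio": {"standard": "ST 2110-30", "port": 5006},
    "anc":   {"standard": "ST 2110-40", "port": 5008},
}

def essence_count(streams):
    """How many independent essence streams make up this source."""
    return len(streams)
```

Splitting essences this way is what enables the separate data, essence and control planes of interoperability that the JT-NM roadmap describes.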
A cloud-based dashboard can be used to orchestrate and spin up an entire event, including the cloud instances used for live processing. Baskar explains this new way of delivering live production, which will bring unmatched flexibility and greater control over operational costs.
Isak will show how end-users can have full control over every step of the media supply chain, while still maintaining an overview of the total cost. It is Isak's belief that this way, users can take the step into using the cloud and get all its benefits: unlimited scaling, rapid prototyping, improved manageability, less maintenance, and access to new tools and algorithms. The way to achieve this is with APIs. The concept of APIs dates back more than 50 years, but this presentation will point out specific properties that APIs should have to facilitate cloud-to-cloud and cloud-to-on-premise connectivity and media exchange. If time permits, real-time demos will be performed.
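A minimal sketch of the kind of API surface the presentation argues for: every supply-chain step is a job, and cost is queryable alongside status (the class, methods and fields below are hypothetical, not any vendor's actual API):

```python
class MediaJobsAPI:
    """Toy job-submission API where each supply-chain step is a job
    and the running cost can be queried at any time, giving the
    'overview of the total cost' the presentation describes."""

    def __init__(self):
        self._jobs = {}

    def submit(self, job_id, operation, est_cost_usd):
        """Queue one supply-chain step (e.g. transcode, QC scan)."""
        self._jobs[job_id] = {"operation": operation,
                              "status": "queued",
                              "cost_usd": est_cost_usd}
        return job_id

    def total_cost(self):
        """Sum estimated cost across every submitted job."""
        return sum(j["cost_usd"] for j in self._jobs.values())

api = MediaJobsAPI()
api.submit("j1", "transcode", 0.42)
api.submit("j2", "qc-scan", 0.10)
```

Exposing cost through the same API as the work itself is what keeps pay-as-you-go cloud usage predictable rather than surprising.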
In this session, Aperi's Joop Janssen details how the deployment of cloud-based software can change how we approach the live production of the world's biggest sporting and entertainment events. He explains how a native IP-focused approach benefits multiple stakeholders, including facilities operators in the field, rights holders transporting live event feeds to broadcasters, and service operators delivering content to audiences. With software-driven cloud functionality, live programming can be produced, transported and delivered using a much more attractive pay-as-you-go production model.
WGBH Boston, the largest producer of PBS content for TV and the Web, selected a hybrid cloud archive. By using object storage, WGBH's large volumes of metadata can be searched quickly, providing a detailed description of the content in every storage object and allowing production staff to locate content through automated searches and move media from one tier to another.
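A toy sketch of the search-then-tier pattern described above, where a metadata hit promotes an object to faster storage (field names and tier labels are illustrative, not WGBH's actual schema):

```python
def find_and_tier(objects, query, hot_tier="disk"):
    """Search each object's descriptive metadata and promote matches
    to the hot tier, mimicking automated search plus tier moves in a
    hybrid cloud archive."""
    hits = [o for o in objects if query.lower() in o["description"].lower()]
    for o in hits:
        o["tier"] = hot_tier
    return hits

objects = [
    {"id": "obj-1", "description": "Documentary raw interview", "tier": "tape"},
    {"id": "obj-2", "description": "Promo master", "tier": "tape"},
]
hits = find_and_tier(objects, "interview")
```

Keeping rich metadata with every storage object is what makes these automated searches possible without restoring media first.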
Cloud discussions typically revolve around technology benefits - more storage, more processing power, global accessibility. Combined with advancements in IT, telecoms and AI, these capabilities are dramatically reshaping traditional workflows and business landscapes. Muriel le Bellac, CEO of Videomenthe, examines approaches to successfully integrating emerging technologies while enabling humans to continue to do what they do best.
Video On Demand and OTT are the fastest and most profitable growth areas in media distribution today. We will show you how to deploy a future-proof, software-defined, single automated workflow system across all your broadcast playout platforms, delivering the media assets you need for tomorrow's broadcast world while saving you money, time and resources right now.
Dr. Cross discusses the forward-looking benefits of applying computer technology to video production, including: economics; leveraging existing IP infrastructure; shared live media sources; the flexibility of virtualization; and the television station as a data center.
This presentation will explore the various factors currently holding back broadcasters and operators from distributing more UHD channels and content, and what steps can be taken to make 4K distribution more cost-effective, and therefore viable, from a business standpoint.
Video consumption on second screens (smartphones, tablets, PCs) is increasing dramatically. To stay competitive, satellite operators need an affordable way to deliver high-quality OTT services, including live, VOD, catch-up and start-over TV, to connected devices. This session presents case studies on transitioning to a full OTT distribution workflow that delivers the same video quality as broadcast with minimal latency and buffering.
Ian will demonstrate how non-linear editing platforms can be replaced in workflows that distribute content to social media, online and broadcast channels, without the need for desktop tools. Cloud-based platforms allow the focus to return to content creation rather than concerns about technology restrictions.
Media content begins with acquisition from a single source, yet typically requires multiple, loosely-coordinated ingests to create the final presentation. Dan Montgomery, President of Imagine Products, Inc., explores how emerging AI and Cloud technologies can reduce human errors, identify equipment failures, collect useful metrics and metadata, track assets from origin to end uses, and link acquisitions to archived assets, yielding optimized workflows.