
3 - 6 December 2021

Technical Papers

Technical Papers Process

Technical Papers present original, novel research on solutions to real-world problems faced by the international broadcast and digital media industry, and must be non-commercial (see the Tech Paper Guidelines). They can cover any relevant industry topic, but we encourage submissions on the topics of most current interest to the IBC audience.

The first step is submitting a 300-word synopsis. All submissions are peer reviewed by a committee of independent industry experts. Successful authors are then invited to submit a full Technical Paper for further peer review, with successful papers presented at the IBC Conference.

Technical Paper entries for 2021 are now closed

Facial Recognition - Various facets of a powerful media tool

Facial recognition is one of today's most controversial media technologies. It is now claimed to recognise subjects reliably while they wear sunglasses or medical masks, and even to differentiate between identical twins.

Not all of its many possibilities are sinister, however. In this session we shall see how facial processing is already employed by a global broadcaster and is demonstrating its value in several areas of live broadcast production, such as helping a commentator to recognise each of 500 runners in a road race. In a second presentation we shall discover how a convolutional neural network has been trained to classify facial expressions, allowing it to recognise and tag emotion in video material. This provides an entirely new way of categorising and searching drama and movie content.
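
To make the tagging idea concrete, here is a minimal sketch of a network classifying a face crop per frame and emitting time-coded emotion labels for search. The 48×48 greyscale input, the five-label emotion set and the network shape are all our own assumptions, not the presenters' system:

```python
# Minimal sketch: classify face crops frame by frame, emit (time, emotion) tags.
import torch
import torch.nn as nn

EMOTIONS = ["neutral", "happy", "sad", "angry", "surprised"]  # assumed label set

class EmotionCNN(nn.Module):
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24x24 -> 12x12
        )
        self.classifier = nn.Linear(32 * 12 * 12, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def tag_frames(model, face_crops, fps=25.0):
    """Return (timestamp_seconds, emotion) for a batch of 48x48 grey face crops."""
    model.eval()
    with torch.no_grad():
        labels = model(face_crops).argmax(dim=1)
    return [(i / fps, EMOTIONS[k]) for i, k in enumerate(labels.tolist())]

# Example: tag one second of (dummy) video frames with an untrained model.
model = EmotionCNN()
tags = tag_frames(model, torch.randn(25, 1, 48, 48))
```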

Orchestrated Devices - A vision of networked home entertainment

The number of devices in the modern home capable of reproducing media content is already considerable, yet a user will typically employ only one of them when enjoying a particular form of entertainment. Suppose that all the devices in the home were connected and synchronised through an internet-of-things-style network; the home could then come alive, with the same entertainment engaging multiple screens, speakers, mobile phones, shaking sofas, even the domestic lighting and smart appliances. Of course, the concept of object-based media brings this orchestration idea closer to reality, as we shall see!
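
As a rough sketch of how such synchronisation works (the device names, clock offsets and latencies below are invented for illustration), each device converts a shared presentation timestamp into its own render deadline:

```python
# Sketch: every device schedules playout against a shared clock, so screens,
# speakers and lights act on the same media timeline.
import time

class OrchestratedDevice:
    def __init__(self, name, clock_offset_s=0.0, output_latency_s=0.0):
        self.name = name
        self.clock_offset_s = clock_offset_s      # offset from the shared wall clock
        self.output_latency_s = output_latency_s  # device-specific render delay

    def local_deadline(self, presentation_time_s):
        """When this device must start rendering, in its own clock."""
        return presentation_time_s + self.clock_offset_s - self.output_latency_s

# The same cue is scheduled on every connected device.
cue_time = time.time() + 2.0  # shared presentation timestamp, 2 s from now
devices = [
    OrchestratedDevice("tv", output_latency_s=0.120),
    OrchestratedDevice("phone", clock_offset_s=0.004, output_latency_s=0.030),
    OrchestratedDevice("smart_light", output_latency_s=0.250),
]
for d in devices:
    print(f"{d.name}: start rendering at t={d.local_deadline(cue_time):.3f}")
```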

In this session, we shall hear how a prototype audio orchestration tool has been designed and trialled on several productions to evaluate the principle of creative orchestration, with extremely positive results. In a second presentation, intelligence within the home automatically orchestrates the incoming media across the available devices, taking account of the content, the environment and the wishes of the user. Trials of this architecture will also be seen to have exceeded users' expectations.

Our supporting paper investigates the related challenge of orchestrating the desires of multiple users sharing devices.

Cutting Edge Technologies - A preview of some experimental concepts

In this session we shall examine three new ideas which are thought-provoking and potential media game-changers. First, we introduce the robot companion who watches TV with your family and friends, and shares the fun and the emotional engagement. Implemented as an autonomous, free-standing character, the robot attends to the screen and enjoys discussing the content. In a trial, 70% of participants thought that the robot promoted conversation and a relaxed atmosphere.

Our next ongoing experimental development concerns a live performance in which musicians in a concert hall play together with itinerant musicians walking through the streets of the city. We chart the considerable technical difficulties involved in conveying several low-latency audio and 4K video signals through 5G networks, together with the associated production and artistic challenges. Several key latency problems remain, but the date is set for an ambitious concert in the autumn of this year!
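
To see why latency dominates the engineering, here is an illustrative one-way budget with our own assumed figures, not the project's measurements:

```python
# Back-of-envelope latency budget for remote ensemble playing. Musicians
# struggle to stay together once mouth-to-ear delay exceeds roughly 25-30 ms.
budget_ms = {
    "audio capture + A/D": 2.0,
    "encode (low-latency codec)": 5.0,
    "5G radio + core network": 12.0,   # assumed one-way figure
    "jitter buffer": 5.0,
    "decode + D/A + playout": 4.0,
}
total = sum(budget_ms.values())
print(f"one-way total: {total:.1f} ms (ensemble threshold ~25-30 ms)")
# With 28 ms already spent, any extra network hop or deeper buffer pushes
# the performance past the playable limit - hence the focus on squeezing
# every stage of the chain.
```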

The third initiative is a novel means by which an individual's entire media streaming history, from all their service providers, can be used to provide personal recommendations without any contributing service needing to hold or process their data. The architecture is based upon the Solid data ecosystem concept, which permits local storage of personal data and an exchange format that preserves confidentiality while allowing service suppliers' business models to operate. The Solid repositories, or 'pods', used are open solutions conforming to W3C standards. Early demonstrator trials have provided an understanding of the benefits, challenges and risks for all parties involved, and great optimism that this ecosystem could become fundamental across many national services involving personal data.
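
The privacy pattern can be sketched in a few lines; the data shapes and field names below are our own illustration, not the Solid API:

```python
# Conceptual sketch: viewing histories from several services sit in the
# user's own pod, the profile is computed locally, and only a coarse,
# non-identifying result ever leaves the device.
from collections import Counter

# Streaming histories as they might be aggregated inside a personal pod.
pod_history = {
    "service_a": [("drama", "TitleX"), ("drama", "TitleY")],
    "service_b": [("documentary", "TitleZ"), ("drama", "TitleW")],
}

def local_genre_profile(pod):
    """Runs on the user's device; raw history never leaves the pod."""
    counts = Counter(genre for items in pod.values() for genre, _ in items)
    total = sum(counts.values())
    return {genre: n / total for genre, n in counts.items()}

def shareable_request(profile, top_k=2):
    """Only coarse genre weights are exchanged with a service provider."""
    top = sorted(profile.items(), key=lambda kv: -kv[1])[:top_k]
    return {"preferred_genres": [g for g, _ in top]}

profile = local_genre_profile(pod_history)
print(shareable_request(profile))  # {'preferred_genres': ['drama', 'documentary']}
```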

AI in Media Production - Creating new markets for linear content

For decades, broadcasters have been producing linear programmes, such as news, magazines or documentaries, which contain valuable audio-visual information about a vast variety of individual topics. The problem is that these individual topics are often neither addressable nor findable. Could AI and machine learning segment or chapterise this archived material so it would be re-usable in the interactive digital world? Might AI even be able to re-edit it into personalised media? We look at a fascinating project which is doing all this and more. Improvement is still required, especially around the editorial challenges AI faces in creating re-compiled media, but public-facing trials are under way and generating much interest.
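
As a flavour of the first step such a system might take, here is a generic shot-boundary detector (a standard technique, not necessarily the project's method); detected cuts can then be grouped into chapters using subtitles, speakers or on-screen text:

```python
# Detect shot boundaries from frame-to-frame luminance histogram change.
import numpy as np

def shot_boundaries(frames, threshold=0.4, bins=32):
    """frames: iterable of greyscale images as 2-D uint8 arrays.
    Returns frame indices where the histogram jumps sharply."""
    cuts, prev_hist = [], None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / frame.size  # normalise so the bins sum to 1
        if prev_hist is not None:
            # L1 distance between histograms: 0 (identical) .. 2 (disjoint)
            if np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts

# Example: a hard cut between dark and bright synthetic frames.
dark = [np.full((72, 128), 20, np.uint8)] * 10
bright = [np.full((72, 128), 200, np.uint8)] * 10
print(shot_boundaries(dark + bright))  # [10]
```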

Also key to the re-use of these re-purposed assets is recognising the diversity of today's video delivery platforms, in particular social media. AI can be used to cleverly target particular content offerings across platforms according to predictions of audience interests, trending stories, particular localities, anniversaries and so on, all achieved through news scanning and online trend monitoring. Content can also be automatically adapted to suit the style and culture of each platform. Join us to hear from an ambitious European project which is seeking to optimally craft and distribute video across a diversity of channels.

Advances in Audio - Using some remarkable signal processing

Every broadcaster knows that the most common complaint from viewers is that programme dialogue is hard to discern against a background of atmospheric sounds, mood music and competing voices. It is especially a problem of age: 90% of people over 60 report difficulties. Research over decades has unsuccessfully sought a way of enhancing the intelligibility of TV dialogue - until now! We present the results of trials by a collaboration of researchers using their deep-neural-network-based technology across a wide range of TV content and age groups. These show startling performance; join us to judge the benefits for yourself!
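
The remix stage of such dialogue enhancement can be sketched as follows; the deep-network separator itself is stubbed out here, and the gain figure is an assumption:

```python
# Sketch: estimate the dialogue stem, attenuate the residual background,
# and re-mix the programme with dialogue kept at full level.
import numpy as np

def separate_dialogue(mix):
    """Stand-in for the trained separator: returns a dialogue estimate.
    In a real system this would be the DNN's output."""
    return 0.5 * mix  # placeholder only

def enhance(mix, background_gain_db=-9.0):
    dialogue = separate_dialogue(mix)
    background = mix - dialogue                  # residual after separation
    g = 10 ** (background_gain_db / 20.0)        # dB -> linear gain
    return dialogue + g * background             # remix with quieter background

fs = 48_000
mix = np.random.randn(fs).astype(np.float32)     # one second of dummy audio
out = enhance(mix)
```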

We shall also hear about exciting research using cloud-based AI and 5G connectivity to deliver live immersive experiences to a variety of consumer devices. Key to the experience is the ability of viewers to change their content viewpoint, with live rendering taking place in the cloud. The presentation focuses on the audio, which is object-based and AI-driven and carries with it the metadata necessary for personalised rendering of the scene. The capture of the background is also critical to the recreation of the audio scene; for this, the team chose second-order ambisonics accompanied by Serialised Audio Definition Model (S-ADM) descriptive metadata. The presentation will explore detailed aspects of the audio processing and production. Altogether, a fascinating glimpse of the technology required to convey 360° audio for free-viewpoint XR!
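
To make the metadata idea concrete, here is a simplified stand-in for the kind of per-object scene description S-ADM carries (the field names are our own, not the ADM schema):

```python
# Sketch: each audio object travels with position and gain metadata so the
# renderer - in the cloud or on the device - can rebuild the scene for the
# viewer's chosen viewpoint.
import json

scene = {
    "bed": {"format": "HOA", "order": 2, "channels": 9},  # second-order ambisonics
    "objects": [
        {"id": "commentary", "azimuth_deg": 0.0, "elevation_deg": 0.0,
         "distance_m": 1.0, "gain_db": 0.0, "interactive": True},
        {"id": "crowd_left", "azimuth_deg": 70.0, "elevation_deg": 5.0,
         "distance_m": 12.0, "gain_db": -6.0, "interactive": False},
    ],
}

def render_gain(obj, user_gain_db=0.0):
    """Personalisation hook: e.g. let the viewer raise the commentary level."""
    extra = user_gain_db if obj["interactive"] else 0.0
    return obj["gain_db"] + extra

payload = json.dumps(scene)  # metadata serialised alongside the audio
print(render_gain(scene["objects"][0], user_gain_db=+6.0))  # 6.0
```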

The Cloud - For Live and Production Workflows

Cloud-based production is revolutionising the working practice of journalists and entertainment media producers, allowing increased flexibility of location and opportunities for innovation and speed of creation. In this session we explore two advances which contribute to the efficiency of the workflow. First we look at the development of an app-based news service where journalists not only create the stories but drive the whole content creation and delivery processes. The design must therefore focus on ease of use with intuitive interfaces. The app offers a range of news, politics, lifestyle and entertainment services and is designed around the requirements of the daily commuter. Join us to discover the challenges faced by the designers of the operational workflows and see the whole service in action.

Our second cloud presentation delves deeper into the technology involved in developing a hybrid on-premises/cloud system for live and production workflows. It focuses on the JT-NM reference architecture, showing how this allows media to be ingested from anywhere and stored either locally or in the cloud, while staff work from any location with a reasonable internet connection. We shall see how this advanced and novel implementation works scalably, globally and securely.

This session also recognises an additional paper on cloud-based multi-station architecture for networked radio stations, which forms part of the conference proceedings but could not be presented in person for scheduling reasons. This paper is highly regarded and is recommended reading.

ST 2110 on Modern IT Infrastructure – How Difficult is it?

The goal of leveraging modern high-performance IT fabric for professional media handling is laudable. But just how difficult is it? And how close to commercial off-the-shelf (COTS) IT equipment are we? In this session, you will learn from an expert who lays bare their practical experience of the complexities and challenges of implementing ST 2110 and asks whether this is the right solution to achieve the goal. In our second paper the author demonstrates a state-of-the-art GPU/DPU in a Microsoft Windows device outputting ST 2110 to a network-attached display.
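
A little arithmetic shows why this is hard on general-purpose hardware (illustrative figures for a single uncompressed HD stream):

```python
# Why ST 2110 pushes COTS hardware: the packet-pacing arithmetic for one
# uncompressed HD stream, assuming 1400-byte RTP payloads.
width, height, fps = 1920, 1080, 50
bits_per_pixel = 20                      # YCbCr 4:2:2, 10-bit
payload_bytes = 1400

frame_bytes = width * height * bits_per_pixel // 8
pkts_per_frame = -(-frame_bytes // payload_bytes)   # ceiling division
pkts_per_second = pkts_per_frame * fps
spacing_us = 1e6 / pkts_per_second

print(f"{frame_bytes/1e6:.2f} MB/frame, {pkts_per_frame} packets/frame")
print(f"{pkts_per_second:,} packets/s -> one packet every {spacing_us:.2f} us")
# ~185,000 packets/s at ~5 us spacing: the pacing models of ST 2110-21
# demand this regularity, which is why NIC hardware offload (the DPU in
# the second paper) matters so much.
```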

More Formats - More Conversions

Whilst the enhanced video formats of Ultra High Definition (UHD), Wide Colour Gamut (WCG) and High Dynamic Range (HDR) present in spectacular quality, they also bring a myriad of format conversion challenges, whether that is up-converting legacy content for use in new productions or down-converting HDR/WCG to suit traditional devices. In this session, we address both of these challenges: one paper demonstrating an effective colour mapping model that takes into account the behaviour of the human visual system, and the other using machine learning for super-resolution in a production environment. Our supporting paper continues the enhancement theme, assessing the effectiveness of machine learning in reducing coding artefacts.
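
For orientation, the basic behaviour that any such down-mapping refines can be sketched as a soft-knee tone curve; this is a toy model of our own, not the paper's:

```python
# Toy HDR-to-SDR luminance mapping: linear below a knee, then highlights
# rolled off asymptotically towards the SDR peak rather than hard-clipped.
import numpy as np

def soft_knee_tonemap(hdr_nits, sdr_peak=100.0, knee=0.8):
    """Map scene luminance (nits) into an SDR display range."""
    hdr = np.asarray(hdr_nits, dtype=float)
    k = knee * sdr_peak
    return np.where(
        hdr <= k,
        hdr,  # shadows and midtones pass through unchanged
        k + (sdr_peak - k) * (1 - np.exp(-(hdr - k) / (sdr_peak - k))),
    )

print(soft_knee_tonemap([50, 100, 400, 1000]))  # highlights compressed, shadows kept
```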

Optimising Streaming – Savings at Scale

Streaming is ubiquitous, and it is both bandwidth- and storage-hungry. In this session we focus on improvements to both of these challenges, a must for cost saving at scale. Our first paper investigates, tests and provides an open-source solution that optimises viewer experience when adaptively streaming context-aware encoded video, a particular challenge owing to its bursty bit-rate profile. Our second paper addresses the storage challenge of supporting both HLS and DASH low-latency streaming, with clearly illustrated examples and experimental results. Our supporting papers continue the optimisation theme, as well as seeking to address the “watch-together” synchronisation challenge.
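
The decision at the heart of adaptive streaming can be sketched in a few lines (a generic throughput-based rule, not the paper's optimiser):

```python
# Pick the highest rendition the recent throughput can sustain, with a
# safety margin that matters all the more when the encode is bursty.
def pick_rendition(ladder_kbps, measured_kbps, safety=0.8):
    """ladder_kbps: available bitrates, ascending. Returns the chosen rate."""
    budget = measured_kbps * safety
    eligible = [r for r in ladder_kbps if r <= budget]
    return eligible[-1] if eligible else ladder_kbps[0]

ladder = [400, 1200, 2500, 5000, 8000]             # kbps
print(pick_rendition(ladder, measured_kbps=4200))  # 2500: budget is 3360 kbps
# Context-aware encodes spend bits unevenly across segments, so a fixed
# safety margin either wastes quality or risks stalls - the gap an
# optimised solution has to close.
```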

Advances in Video Coding

In this session we showcase significant coding gains arising from both traditional and Artificial Intelligence based techniques. Quantisation is at the heart of compression, and in this masterclass paper you will learn how practical advances in rate-distortion optimisation continue to drive encoder gains across codecs. Then there is the rise of Artificial Intelligence, which is playing an increasingly important role in video compression. Here you will hear how machine learning has been successfully used to identify “salient” areas of a picture for concentrated bit allocation - but is it robust and predictable? Our supporting papers continue the Artificial Intelligence theme, addressing where in video compression it is (and is not) best applied and how it might be standardised.
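
The principle behind rate-distortion optimisation fits in one formula, J = D + λR: among candidate coding modes, choose the one minimising the Lagrangian cost. A schematic example with invented candidate figures:

```python
# Rate-distortion optimised mode decision: minimise J = D + lambda * R.
def best_mode(candidates, lam):
    """candidates: list of (name, distortion, rate_bits)."""
    return min(candidates, key=lambda c: c[1] + lam * c[2])

modes = [
    ("skip",        1200.0,   8),   # cheap but distorted
    ("inter_16x16",  400.0,  96),
    ("intra_4x4",    250.0, 240),   # accurate but expensive
]
print(best_mode(modes, lam=2.0))    # inter_16x16: 400 + 2*96 = 592 is smallest
# skip costs 1216, intra costs 730. Raising lambda (coarser quantisation)
# tips the choice back towards cheaper modes, which is why quantiser and
# mode decisions are tuned together.
```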

XR – Producing Immersive Experiences

For XR to come of age, we need practical capture systems, familiar production processes and an effective blending of game and multimedia technologies. In this session, we address all three of these challenges: showcasing a production environment in which 6 degrees of freedom (6DoF) video and spatial audio capture are composited using a game engine; presenting a practical real-time (30 fps) point cloud capture system; and discussing the current state of convergence between game and traditional multimedia in XR systems.
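
A quick calculation with assumed figures shows why real-time point cloud capture is a headline claim:

```python
# Raw data rate of a modest dynamic point cloud (illustrative figures).
points_per_frame = 1_000_000
bytes_per_point = 15          # 3 x float32 position + 3 x uint8 colour
fps = 30

frame_mb = points_per_frame * bytes_per_point / 1e6
rate_gbps = points_per_frame * bytes_per_point * 8 * fps / 1e9
print(f"{frame_mb:.0f} MB/frame, {rate_gbps:.1f} Gbit/s uncompressed at {fps} fps")
# 15 MB/frame and ~3.6 Gbit/s before any compression: sustaining capture,
# fusion and delivery at 30 fps is a genuine system-engineering feat.
```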

Contact Us

If you have any questions regarding the Technical Papers, please do not hesitate to contact a member of the IBC team using the details below:

Email: technicalpapers@ibc.org

Telephone: +44 (0)20 7832 4100
