Meet the most innovative thinkers in broadcast, media and entertainment
From 5G outside broadcasters to markerless motion tracking technology
The worlds of broadcast, media and entertainment are continually being disrupted by new technologies and ideas. These advances are driving higher capture and display resolutions, greater information bandwidths, a larger array of devices on which content is consumed, and all-new ways of consuming it.
Access to advanced computing power, either locally or in the cloud, is making artificial intelligence and machine learning useful tools across a wide range of disciplines, while innovative thinking and academic research are paving the way for advances like glasses-free 3D viewing. Here are some of the companies already challenging the way the industry works...
Rebroadcasting and production - Telefónica TV5G
5G doesn’t just add another ‘G’ to the mobile telecoms network. Its low latency and high bandwidth mean that it’s a huge advance over 4G, and promises to disrupt a whole range of industries, including autonomous vehicles, ‘extended reality’ wearables, automated manufacturing and, of course, broadcast media.
- Read: How 5G will change media and entertainment now and in the future
- Discover more about IBC
- Register your interest for IBC2020
Telefónica, for example, has already announced its TV5G rebroadcasting and production solution, a 5G-enabled suite of hardware and software that combines Ericsson network equipment, Samsung Galaxy S10 5G smartphones and a 5G-enabled backpack from Idronia Multimedia Solutions.
“Nowadays, high level media coverage is something reserved only for large events or competitions due to the cost and complexity of the infrastructure that it requires,” said Juan Cambeiro, Head of the Telefónica España Innovation Project.
“This TV5G rebroadcasting and production solution allows you to have professional media coverage for even very local contents. In fact, we could talk about a democratisation of professional television coverage because it allows to broadcast very local content using less resources than ever before.”
AI screenplay analysis - ScriptBook
The benefits of artificial intelligence and machine learning are being felt in various aspects of movie production. At Digital Domain, for example, AI has been used for years to assist in the arduous task of performance capture and animating CG characters – such as Thanos from Avengers: Infinity War.
Elsewhere, ILM is using AI to de-noise renders, while researchers at Disney recently developed an algorithmic system that automatically generates animated, pre-viz storyboards from text-based scripts.
But several companies are now using AI technology to analyse movie projects. Antwerp-based ScriptBook leads the way in this area, using artificial intelligence to analyse screenplays.
By looking at a screenplay’s structure, content and cast, the ScriptBook system can produce a detailed report suggesting the project’s target demographic, critical reception and, most importantly, its potential box office. The company claims it does all this with an 86% success rate.
Similarly, LargoAI – headquartered in Lausanne, Switzerland – provides what it calls ‘data-assisted moviemaking’. Its predictive analytics forecast audience reactions to a script, finished movie or advertisement, giving filmmakers time to make improvements or enabling producers to make an educated call on which movie rights to acquire.
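Neither company discloses its models, but the general shape of such a pipeline – extract measurable features from the screenplay text, then feed them to a trained predictor – can be sketched as below. Everything here (the feature set, the weights, the function names) is an illustrative assumption, not ScriptBook's or LargoAI's actual method:

```python
from collections import Counter

def screenplay_features(text: str) -> dict:
    """Toy feature extraction: counts that loosely proxy for structure
    and content. Real systems would use far richer signals (character
    graphs, dialogue sentiment, genre, attached cast)."""
    words = text.lower().split()
    counts = Counter(words)
    return {
        "n_words": len(words),
        # Sluglines ("INT." / "EXT.") roughly count scenes.
        "n_scenes": counts["int."] + counts["ext."],
        # All-caps tokens crudely proxy for character cues.
        "n_caps_cues": sum(1 for w in text.split()
                           if w.isupper() and len(w) > 1),
        "vocab_richness": len(counts) / max(len(words), 1),
    }

# Hypothetical weights for a linear "audience score" model; a real
# system would learn these from historical box-office data.
WEIGHTS = {"n_words": 0.001, "n_scenes": 0.5,
           "n_caps_cues": 0.02, "vocab_richness": 10.0}

def predict_score(text: str) -> float:
    feats = screenplay_features(text)
    return sum(WEIGHTS[k] * v for k, v in feats.items())

script = "INT. LAB - NIGHT\nEVA\nWe ship tonight.\nEXT. STREET - DAY\n"
print(round(predict_score(script), 2))
```

The interesting engineering is in the learned weights and features, not the scoring loop – which is why these firms guard their training data.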
Markerless motion tracking - Arraiy
Founded in Palo Alto in 2016, Arraiy’s team of computer scientists is using artificial intelligence and deep learning techniques to help broadcasters and VFX houses combine the real and the virtual more easily than ever before. Its latest solution, DeepTrack, automates the task of real-time camera and object tracking for augmented reality and mixed reality virtual production.
DeepTrack performs markerless tracking at sub-pixel accuracy, and later in 2019 will be joined by two other AI-powered technologies: DeepMatte and DeepDepth.
DeepMatte is designed for real-time matting without the need for a green screen, while DeepDepth performs z-depth extraction and occlusion solving, enabling an even greater degree of realism when combining live and virtual elements.
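Arraiy has not published DeepDepth's internals, but the compositing step that a per-pixel depth map enables is straightforward: wherever the real scene is closer to the camera than the virtual element, the real pixel wins. A minimal sketch, with plain Python lists standing in for images and depth maps:

```python
def composite_with_occlusion(real, real_depth, virtual, virtual_depth):
    """Depth-aware compositing: for each pixel, keep whichever sample
    is nearer the camera (smaller depth value), so real foreground
    elements correctly occlude virtual ones."""
    out = []
    for rrow, drow, vrow, vdrow in zip(real, real_depth,
                                       virtual, virtual_depth):
        out.append([r if dr <= dv else v
                    for r, dr, v, dv in zip(rrow, drow, vrow, vdrow)])
    return out

# 2x2 example: a virtual object ("V") sits at depth 5; a real
# foreground element ("R") at depth 2 occludes it in one corner.
real          = [["R", "R"], ["R", "R"]]
real_depth    = [[2, 9], [9, 9]]
virtual       = [["V", "V"], ["V", "V"]]
virtual_depth = [[5, 5], [5, 5]]
print(composite_with_occlusion(real, real_depth, virtual, virtual_depth))
# → [['R', 'V'], ['V', 'V']]
```

The hard part – which DeepDepth automates – is recovering an accurate depth map from a live camera feed in the first place; once you have it, the merge is a per-pixel comparison.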
Arraiy is currently partnering with Pixotope to provide a complete real-time VFX solution, using Unreal Engine for live rendering.
Curved camera sensor - CEA-Leti
Sometimes it’s the simple ideas that matter most, so it shouldn’t come as a surprise to learn that French research institute CEA-Leti is developing a curved camera sensor. The Pixcurve is curved to mimic the shape of the human retina, keeping every pixel in focus and reducing optical aberrations.
In typical flat sensors, multiple lens elements help to flatten the image but make the camera bulky and heavy. A Pixcurve sensor, however, cuts the number of elements needed, shrinking overall lens size by 60% while improving image quality. Cameras equipped with the technology could be smaller and lighter, easier to assemble and thus cheaper to manufacture.
Initially the Pixcurve sensor is targeted at typical digital cameras, mobile phones, action cameras and VR headsets, but it will no doubt find its way into high-end cameras for film and TV production.
Glasses-free 3D - Dimenco
When James Cameron’s Avatar ushered in a new era of 3D cinema, TV manufacturers were quick to jump on the 3D bandwagon. But its reliance on active shutter glasses to deliver its immersive visuals – with their dimmed view and flickering images – soon saw the public’s appetite for 3D fade.
As Cameron admits, 3D will never take off in the home while it remains tethered to glasses.
Auto-stereoscopic panels that use lenticular lenses have been around for years, but no one has really cracked the technology. This is largely due to issues with resolution (which is halved due to the need to show two views), the size and quality of the lens, and the limited viewing angle.
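The halved resolution follows directly from how lenticular panels work: the left-eye and right-eye views are interleaved in alternating pixel columns beneath the lens array, so each eye sees only half of the panel's horizontal pixels. A toy illustration of the idea (real panels use slanted lenticules and subpixel mapping, so this is a deliberate simplification):

```python
def interleave_stereo(left, right):
    """Column-interleave two equal-sized views for a lenticular panel:
    even columns come from the left-eye view, odd columns from the
    right. Each eye effectively sees half the horizontal resolution."""
    out = []
    for lrow, rrow in zip(left, right):
        out.append([lrow[c] if c % 2 == 0 else rrow[c]
                    for c in range(len(lrow))])
    return out

# One row of a 4-pixel-wide display: each eye gets only 2 of 4 columns.
left  = [["L0", "L1", "L2", "L3"]]
right = [["R0", "R1", "R2", "R3"]]
print(interleave_stereo(left, right))  # → [['L0', 'R1', 'L2', 'R3']]
```

Dimenco's 8K panel attacks the problem with brute force: even after interleaving, each eye still receives roughly a 4K image.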
However, a small Dutch company called Dimenco has been demoing a system that seems to overcome these problems.
Dubbed ‘Simulated Reality’, it employs an 8K LCD display with lens array combined with eye-tracking and image processing to manage the viewer’s interaction with the on-screen content.
The combination of a 3D display with surround sound and haptic feedback is currently marketed as a ‘multisensory experience’ and it’s initially aimed at gaming and the medical and aerospace sectors.
But its ultimate aim must surely be the holy grail of a glasses-free 3D TV for the living room.
Music experiences in VR - Redpill VR
Redpill VR – a developer of interactive social music experiences – has entered into a strategic partnership with Sensorium Corporation, which creates digital simulations of real-world venues.
The duo intends to launch a social virtual reality (SVR) platform combining live music, extended reality experiences and gaming.
The first prototype debuted at the E3 show in Los Angeles in June, with a full launch scheduled for 2020.
The system employs the latest real-time rendering technologies, 5G and edge computing to recreate live concerts in VR – often combined with abstract imagery. This enables users to experience live events – be it music, art or gaming – while enjoying the social aspect of hanging around and mixing with other participants.
“SVR overcomes the main emotional drawback of modern technologies when using virtual reality: the isolation that users feel,” said Ingvar Goldman, CVO of Sensorium.
“Now everyone can fully dive into the VR world of music and gaming events and shows, experience new visuals and sounds that they might not see in a live performance, and meet and interact with other people, making friends in our virtual worlds through real-time communication – and they can do it all from inside the headset they’re wearing at home. SVR is a technology of the future that is here today.”
To learn how these technologies will impact the future of broadcast, media and entertainment, get up close at IBC2020. Register your interest (for free) here.