Future Trends Theatre in the Future Zone
The IABM Future Trends Theatre in the Future Zone hosted a packed programme of presentations exploring up-and-coming technology and business trends and how they will evolve out of today's environment.
The IABM Future Trends Theatre is designed to give attendees an understanding of how new technologies can enable business plans now, rather than dismissing them as way-out ideas and possible hype still waiting to find practical use cases.
With Augmented Reality Audio (AR Audio), we are beginning to tap into the potential to adjust the mix of your daily listening experience: reduce ambient noise, amplify and focus speech, adjust EQ to match your unique hearing profile, and interactively layer music with your voice assistant. This session presents an exciting view into how AR Audio and Artificial Intelligence will drive adaptive and precise noise cancellation, enable enhanced speech intelligibility, power look-forward capability to smooth variations in volume, allow focus on specific sound sources, and attach sounds to virtual objects.
VR video technology through stereoscopic 180° and 360° cameras presents new challenges for video editors, visual effects artists and colourists. Ross Shain, an award-winning visual effects artist and software industry veteran, discusses the best techniques and strategies for solving VR post-production hurdles such as working with equirectangular footage, stabilization, removing cameras and working in stereo.
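To make the equirectangular hurdle concrete, the sketch below (an illustrative helper, not from the session) maps a pixel in an equirectangular frame to the 3D view direction it represents. This mapping is why ordinary 2D tools struggle: straight lines curve, and stabilization must rotate the whole sphere rather than translate pixels.

```python
import math

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit 3D view direction.

    u runs left-to-right across the full 360 degrees of longitude;
    v runs top-to-bottom across 180 degrees of latitude.
    """
    lon = (u / width - 0.5) * 2.0 * math.pi   # -pi .. +pi
    lat = (0.5 - v / height) * math.pi        # +pi/2 (top) .. -pi/2 (bottom)
    x = math.cos(lat) * math.sin(lon)
    y = math.sin(lat)
    z = math.cos(lat) * math.cos(lon)
    return (x, y, z)
```

The frame centre maps to the straight-ahead direction (0, 0, 1), while objects at the left and right edges map to the same direction behind the viewer, which is why effects work near the seam must wrap around.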
Broadcasters and content owners are striving to make programs more immersive and engaging. Advances in technologies such as AI and VR deliver powerful tools to grow viewership, giving viewers exciting and unique ways to enjoy content. This session explores how organizations can harness immersive technologies to captivate and retain viewers - from 3D graphics in the studio to delivering the stadium experience at home. It examines the use of AR and VR as storytelling tools, the technical challenges involved and how these will be overcome in the future, enabling media organizations to produce high-quality VR experiences for different applications.
This presentation will: discuss the software-only approach in comparison to other methods; describe an initial set of open-source microservices for common colourspace transformations, sizing, mixing and compositing; and provide initial measurements of the performance that can be achieved with this approach.
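As a flavour of the kind of small, single-purpose service described above, the sketch below (an illustrative example, not one of the presenters' microservices) implements one common colourspace transformation: non-linear R'G'B' to Y'CbCr using the standard BT.709 luma coefficients Kr = 0.2126 and Kb = 0.0722.

```python
def rgb_to_ycbcr_bt709(r, g, b):
    """Convert non-linear R'G'B' (each in 0..1) to Y'CbCr (BT.709).

    Uses the BT.709 coefficients Kr = 0.2126, Kb = 0.0722.
    A microservice would wrap exactly this one task behind an API.
    """
    kr, kb = 0.2126, 0.0722
    y = kr * r + (1.0 - kr - kb) * g + kb * b   # luma
    cb = (b - y) / (2.0 * (1.0 - kb))           # blue-difference chroma
    cr = (r - y) / (2.0 * (1.0 - kr))           # red-difference chroma
    return y, cb, cr
```

Because the service does one thing with a well-defined contract, its throughput can be measured in isolation - the kind of performance figure the presentation sets out to provide.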
Now that OTT is disrupting television, some startups are pushing the limits by replacing the entire video delivery chain -- from rights management to production, distribution and monetization -- and creating their own ecosystems. It may be another twenty years before mass adoption, but if your app or service can be reduced to a protocol, watch out for the video blockchain competition!
Tracie, an independent media technology consultant and non-executive director of Blue Lucy, will explore how a rights management system that stores key metadata in the blockchain can track the rights and usage of any asset from origination to consumption.
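The tamper-evidence behind that idea can be sketched in a few lines. The example below (a hypothetical hash chain, not Blue Lucy's system) appends each rights-metadata record with a hash that covers its predecessor, so any later edit to an earlier record invalidates every subsequent entry.

```python
import hashlib
import json

def add_record(chain, metadata):
    """Append a rights-metadata record whose hash covers the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"metadata": metadata, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})
    return chain

def verify(chain):
    """Recompute every hash; fail if any record was altered after the fact."""
    prev = "0" * 64
    for entry in chain:
        body = {"metadata": entry["metadata"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev or digest != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Each usage event - a broadcast, a licence grant, a sale - becomes one more record, giving the asset an auditable trail from origination onwards.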
Microservices are a software architecture style in which a complex application is broken down into several independent, loosely coupled processes, each often specializing in a single task. These independent processes communicate with each other using language-agnostic APIs. This new way of conceiving software applications is part of a deeper logic of outsourcing (and progressive dematerialization) of information systems.
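The decoupling described above can be illustrated with a toy pipeline (the service names and JSON contracts are invented for illustration). Each "service" does one task and speaks only JSON across its boundary, so any step could be swapped for an implementation in a different language without the others noticing.

```python
import json

def transcode_service(request_json):
    """Single-task service: 'transcode' an asset to a target format."""
    req = json.loads(request_json)
    return json.dumps({"asset": req["asset"],
                       "format": req["target_format"],
                       "status": "done"})

def notify_service(request_json):
    """Single-task service: turn a completed job into a user message."""
    req = json.loads(request_json)
    return json.dumps({"message": f"{req['asset']} is ready in {req['format']}"})

def pipeline(asset, target_format):
    """Chain the loosely coupled services; each sees only the JSON contract."""
    out = transcode_service(json.dumps({"asset": asset,
                                        "target_format": target_format}))
    return json.loads(notify_service(out))
```

In production the JSON would travel over HTTP or a message bus rather than in-process function calls, but the key property is the same: the contract, not the implementation language, is what the services share.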
Microservices have taken other industries by storm, and they are beginning to do the same in the media business. To fully leverage their benefits we need standardization; organizations like SMPTE and the EBU are taking up the mantle, and this exciting area is ready to take off.
ML and AI are emerging technologies that bring huge potential to broadcasters in areas of the workflow that are labour-intensive and time-sensitive. By automating the creation of richer metadata, they will ultimately help newsrooms focus resources on creating more compelling content.
Video libraries are growing at an unprecedented rate and becoming unmanageable. This session explores how the integration of AI and media management can help content producers prepare for a content-heavy future, monetize libraries and reduce the complexities of managing video. Learn more about the future of automated content production, such as the possibility of an AI framework identifying the ten best scenes in a movie for use in a new trailer.
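The scene-selection step of that trailer idea reduces to a ranking problem. The sketch below is purely illustrative: the per-scene scores stand in for whatever signals an AI framework might emit (faces, action, dialogue density), and the helper simply keeps the top scenes while restoring chronological order for the cut.

```python
import heapq

def best_scenes(scene_scores, n=10):
    """Pick the n highest-scoring scenes, then re-sort them chronologically.

    scene_scores: list of (start_seconds, score) pairs, where score is a
    stand-in for an AI model's estimate of how trailer-worthy the scene is.
    """
    top = heapq.nlargest(n, scene_scores, key=lambda s: s[1])
    return sorted(top)  # trailers usually preserve story order
```

Everything upstream of this function - detecting scene boundaries and scoring them - is where the real AI work lies; the point here is only that the final assembly step is simple once those scores exist.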
Red Bee Media will discuss the key challenges they still face when deploying Automatic Speech Recognition to subtitle pre-recorded and live content, whilst Speechmatics will explain why these challenges remain for speech recognition systems and how they could be addressed in the future. The session covers three key challenges: uncommon vocabulary, punctuation and the use of specific dialects.
Whilst AI has been the buzzword of many recent tech talks, a gap still exists between the technology itself and most media workflows. Content owners and creatives alike can harness AI's full potential to work faster and create even better stories. This session explores how AI can lower costs, reduce publishing time, create a better experience for both editors and end-users, and even make your content more human.
This technical presentation from Carl Petch of Telstra Broadcast Services will look at why broadcast providers are moving to virtualization and the benefits of applying it to broadcast workflows. Carl will share lessons learned from architecting media Virtual Network Functions.