The aim of this 2023 Accelerator was to present two different use cases based on synthetic humans: first, a theatre filled with the melodic tones of Maria Callas, and second, a photorealistic sign language interpreter addressing accessibility in broadcasting. Together, these two use cases aimed to demonstrate how synthetic humans can captivate audiences in visually stunning, emotionally moving, and inclusive ways.
Champions: RAI, EBU, ITV, VRT, YLE, BBC, Verizon Business, King's College London, University of Southampton, Epic Games (Unreal Engine)
Participants: Signly, Pluxbox, D&B Solutions, V-Nova, Hand Identity, 4DR Studios, Respeecher
Kickstart Day Pitch:
Both Proof of Concept workstreams used a variety of production techniques, including multiple motion capture and synthetic human creation toolkits. They explored how synthetic humans can accurately and realistically replicate real human movements, facial expressions, and voices, in order to create and publish more believable, lifelike characters for both posthumous and living humans. Specific areas of R&D included:
Recreating people as photorealistic 3D models from archive material and from new photography
Capturing body and face in real time
Optimising lip-sync technology for speech and singing
Leveraging a broadcaster's archive content
Reviewing cost-effective, sustainable tools
Outputting to multiple devices (VR headset, mobile…)
Developing interoperable Talent ID labelling for provenance verification and automation
The final POC results showcased how synthetic humans can increase inclusivity and open up new storytelling possibilities on emerging platforms, as well as enhance traditional media such as television programmes, live on-air presenting, and broadcasting.
As part of the final POC results, the consortium also produced a technical paper (PDF) capturing the key learning points, specific industry calls to action, and suggested policy guidelines on AI and sustainable production in synthetic media and humans.