XR – advances in capturing, rendering, and delivering

14 Sep 2024
Conference Room 2
Conference (Technical Papers)

In this extended session, four authors will each present their impressive research in the field of XR. Our first is an outstanding paper on the emerging volumetric video technology of Neural Radiance Fields (NeRFs), which provides a thorough and detailed treatise of the state of the art alongside comparative performance results. Our second paper presents results from an ultra-low-bit-rate 3D conferencing system, built using pre-trained NeRF models for high-fidelity 3D head reconstruction and real-time rendering. Our third paper seeks a universally acceptable standard for representing volumetric video: built on glTF (Graphics Library Transmission Format), the authors present results demonstrating both effective file playback and suitability for streaming. Our final paper rethinks capture, with a camera system that can be adaptively adjusted across its field of view to simultaneously capture regions of high dynamic range (with longer exposure) and high motion (with a higher frame rate). A working proof of concept is presented and discussed in detail. The session is further supported by a BBC paper exploring how to incorporate a live music event into a virtual environment.

Speakers
Aljosa Smolic, Professor - Hochschule Luzern (HSLU)
Kodai Kikuchi, Research engineer - NHK (Japan Broadcasting Corporation)