
Researchers have introduced VideoRF, a new approach to streaming dynamic NeRF scenes on various devices, including mobile platforms.

Neural Radiance Fields (NeRFs) allow highly detailed 3D representations of scenes to be generated from a series of 2D images. Older methods struggle to render moving scenes and are resource-intensive. However, newer NeRF variants and related methods can display moving scenes while requiring significantly less time and computing power to train and render.

With VideoRF, a team from ShanghaiTech University, KU Leuven, and NeuDim demonstrates a method that enables real-time streaming of dynamic NeRFs to mobile devices such as smartphones or VR headsets.

VideoRF streams 360° video sequences of people in real-time

VideoRF converts a trained NeRF into a simpler 2D image sequence. A dedicated training scheme enforces temporal and spatial redundancy in this feature image stream. As a result, the images can be efficiently compressed by standard 2D video codecs and can take advantage of widely available hardware acceleration.
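The idea can be illustrated with a minimal sketch. The code below is a hypothetical stand-in, not the authors' implementation: it flattens a 3D voxel feature grid into a 2D image (the "baking" step) and uses `zlib` in place of a real video codec to show why temporal redundancy matters — when consecutive frames differ only slightly, the frame-to-frame residual compresses far better than an independent full frame.

```python
import numpy as np
import zlib

def bake_to_atlas(features: np.ndarray) -> np.ndarray:
    """Flatten a (D, H, W) voxel feature grid into a 2D image.

    Hypothetical stand-in for the baking step: real mappings are
    occupancy-aware so that nearby voxels land on nearby pixels.
    """
    d, h, w = features.shape
    return features.reshape(d * h, w)

# Two "frames" of a dynamic scene that differ only slightly,
# mimicking the temporal redundancy the training encourages.
rng = np.random.default_rng(0)
base = (rng.random((8, 16, 16)) * 255).astype(np.uint8)
frame0 = bake_to_atlas(base)
frame1 = bake_to_atlas(
    np.clip(base.astype(np.int16) + 1, 0, 255).astype(np.uint8)
)

# zlib stands in for a 2D video codec: the near-constant residual
# between frames compresses much better than a full key frame.
residual = (frame1.astype(np.int16) - frame0.astype(np.int16)).astype(np.int8)
full_size = len(zlib.compress(frame1.tobytes()))
delta_size = len(zlib.compress(residual.tobytes()))
print(full_size, delta_size)
```

Real video codecs such as H.264 or HEVC exploit exactly this kind of inter-frame redundancy with motion-compensated prediction, which is why making the feature stream redundant during training pays off at compression time.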


According to the researchers, the method enables real-time decoding and high-quality rendering on mobile devices at comparatively low, adjustable data rates of around half a megabyte. The team also presents an interactive player for seamless online streaming and rendering of dynamic scenes.

In tests, the team shows that VideoRF can render immersive 360° video sequences of people. The ability to render dynamic neural radiance fields in real-time on mobile devices represents an important step in neural scene modeling, particularly for VR/AR applications.

The team cites the complex multi-view capture systems required for the 360° video sequences as a limitation. In addition, long training times are still required, especially for longer sequences.

More information can be found on the VideoRF project page. The code should also be available there soon.

Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.
Summary
  • Researchers present VideoRF, a new approach to render dynamic NeRF (Neural Radiance Field) scenes on various devices, including mobile platforms such as smartphones or VR headsets.
  • VideoRF converts a learned NeRF into a simpler 2D image sequence that can be efficiently compressed, decoded, and rendered in real-time on mobile devices.
  • Despite successes in creating 360° video sequences, complex multi-view acquisition systems and long training times are still required for longer sequences.
Max is managing editor at THE DECODER. As a trained philosopher, he deals with consciousness, AI, and the question of whether machines can really think or just pretend to.