Google’s new graphics AI renders photorealistic 3D scenes from 360-degree footage and allows NeRFs to be used in everyday settings.
Neural Radiance Fields (NeRFs) are artificial neural networks that learn a volumetric representation of an object from a series of images and can then render that object from new viewpoints. For photorealistic renderings, NeRFs rely on a technique similar to ray tracing.
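The ray-tracing-like step can be sketched in a few lines: for each pixel, samples along a camera ray are composited front to back using the densities and colors the network predicts. This is a minimal illustration of that volumetric compositing, not Google's implementation; the function name and array shapes are assumptions for the example.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite samples along one camera ray (illustrative sketch).

    densities: (N,) non-negative volume density at each sample
    colors:    (N, 3) RGB color predicted at each sample
    deltas:    (N,) distance between adjacent samples
    """
    # Opacity of each ray segment from its density and length.
    alpha = 1.0 - np.exp(-densities * deltas)
    # Transmittance: how much light survives up to each sample.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    # Each sample contributes in proportion to opacity * transmittance.
    weights = alpha * trans
    return (weights[:, None] * colors).sum(axis=0)  # final pixel color
```

Empty space (zero density) contributes nothing to the pixel, while a dense sample near the camera occludes everything behind it.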
The technology has made great strides recently: it produces stunning scenes, renders ever faster, and can be trained more quickly. Researchers at Nvidia, among others, therefore increasingly see artificial intelligence as a real alternative to traditional rendering methods.
NeRFs have problems with complex scenes
So far, however, the most impressive results have come from models photographed without a background, or from real objects captured in a confined space from a relatively stable viewing angle.
For “unbounded” scenes – where the camera can point in any direction and content can lie at any distance, i.e., full 3D scenes comparable to a computer game – NeRFs have so far fallen short.
In these cases, the AI systems produce blurry, low-resolution renderings and exhibit image artifacts because they do not correctly learn the relative scale of nearby and distant objects.
NeRFs are therefore suitable for rendering real-world objects in front of natural backgrounds, such as a garden, only if the camera does not move around the object but keeps pointing in the same direction.
Mip-NeRF 360 renders impressive 360-degree images
Google researchers are now demonstrating mip-NeRF 360, an AI system that enables 360-degree renderings of real-world objects in front of complex backgrounds. To achieve this, the researchers improved the training procedure, for example with a coarse-to-fine strategy that samples the scene first with coarse and then with fine ray intervals.
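The coarse-to-fine idea can be sketched as follows: a first pass samples the ray in uniform coarse bins, and a second pass places fine samples where the coarse pass assigned high weight (inverse-transform sampling over the coarse weights). This is a hypothetical sketch of the general strategy, not the exact procedure in mip-NeRF 360; the function name and parameters are assumptions.

```python
import numpy as np

def coarse_to_fine_samples(near, far, weights_coarse, n_fine, rng):
    """Draw fine sample positions along a ray, concentrated in
    the coarse bins that contributed most to the rendering."""
    n_coarse = len(weights_coarse)
    edges = np.linspace(near, far, n_coarse + 1)     # coarse bin edges
    pdf = weights_coarse / weights_coarse.sum()      # normalize weights
    cdf = np.concatenate([[0.0], np.cumsum(pdf)])
    u = rng.uniform(size=n_fine)                     # uniform draws in [0, 1)
    idx = np.searchsorted(cdf, u, side="right") - 1  # pick a coarse bin
    idx = np.clip(idx, 0, n_coarse - 1)
    # Place each draw linearly inside its chosen bin.
    t = (u - cdf[idx]) / np.maximum(pdf[idx], 1e-10)
    return edges[idx] + t * (edges[idx + 1] - edges[idx])
```

If the coarse pass puts all its weight in one bin, every fine sample lands inside that bin, so the network's capacity is spent where the scene content actually is.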
The result is photorealistic 360-degree renderings in which even fine details such as grass or leaves in the background remain visible. Mip-NeRF 360 also generates detailed depth information for each scene.
According to Google, the system still struggles with very fine details. Quality also degrades when the camera moves too far from the center of the scene, and training takes several hours, as with many other NeRF architectures.
More information is available on the mip-NeRF 360 project page.