AI research

Adobe's VideoGigaGAN turns low-res videos into high-res, detailed clips

Matthias Bastian
Researchers from Adobe and the University of Maryland built VideoGigaGAN, a new model for Video Super Resolution (VSR). It can take low-resolution video and turn it into higher-resolution video, adding fine detail while maintaining frame consistency.

Image: Xu et al.

Other approaches to video upscaling often rely on regression-based networks, which tend to produce blurry results. VideoGigaGAN instead builds on GigaGAN, a generative adversarial network that is particularly strong at upsampling images.
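To see why regression losses tend to blur, here is a tiny illustrative sketch (not from the paper): when several sharp outputs are equally plausible for one low-resolution input, the prediction that minimizes mean squared error is their average, which washes out detail. An adversarial (GAN) loss instead rewards committing to a single sharp, plausible output.

```python
import numpy as np

# Two equally plausible sharp "patches" for the same low-res input.
sharp_a = np.array([0.0, 1.0, 0.0, 1.0])
sharp_b = np.array([1.0, 0.0, 1.0, 0.0])

# The single prediction that minimizes mean squared error against both
# targets is their average, so all of the contrast is smoothed away.
mse_optimal = (sharp_a + sharp_b) / 2
print(mse_optimal)  # [0.5 0.5 0.5 0.5]
```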

But the researchers found that using GigaGAN directly for VSR causes problems such as flickering and aliasing artifacts between frames. To address this, they extended GigaGAN with additional components, including flow-guided feature propagation, temporal attention, and anti-aliasing blocks, that make the output frames more consistent and higher quality.

Video: Xu et al.
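As a rough illustration of how such temporal components work, here is a minimal, hypothetical PyTorch sketch of a temporal attention layer that lets features at the same spatial position exchange information across frames. The class name and tensor shapes are assumptions made for this example; it is not code from the paper.

```python
import torch
import torch.nn as nn

class TemporalAttention(nn.Module):
    """Self-attention over the time axis of (batch, time, channels, H, W) features."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = x.shape
        # Treat each spatial position as an independent sequence of frames
        # and let frames attend to one another, which discourages flicker.
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(tokens, tokens, tokens)
        return out.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

# Toy usage: 2 clips, 8 frames, 64 feature channels, 16x16 feature maps.
features = torch.randn(2, 8, 64, 16, 16)
print(TemporalAttention(64)(features).shape)  # torch.Size([2, 8, 64, 16, 16])
```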

Tests show that VideoGigaGAN balances temporal consistency and detail better than previous methods, producing video with far more fine detail than the current state of the art. The model can upscale video resolution by a factor of 8, for example turning a 240×135 clip into 1920×1080, by generating detail that plausibly matches the scene.

However, because that detail is generated rather than recovered, the upscaled video is to some extent AI-generated and no longer fully represents reality, which may matter in some use cases. The model also has limitations: on long videos, errors can accumulate and propagate across frames, and small elements such as text are often lost because they cannot be resolved in the low-resolution input.

You can see many demos and comparisons with other methods on the VideoGigaGAN project website. The paper does not say whether or when Adobe will incorporate the model into its products, but it seems plausible, since the company recently announced that it is adding more generative AI to its video suite.

Overall, VideoGigaGAN is a promising new approach to creating high-resolution video. By building on GAN technology, it can add more detail than earlier methods without losing consistency between frames. Like GigaGAN for images, VideoGigaGAN shows that GANs are far from obsolete.
