Runway ML's new motion slider makes AI video animation more flexible

Image: Runway

Key Points

  • Runway ML Gen-2, an advanced text- and image-to-video model, has gained a motion slider that lets users adjust the amount of movement in still-image animations on a scale of 1 to 10.
  • The Creative Partners Program offers select artists and creatives exclusive access to new Runway tools and models, unlimited plans, 1 million credits, and early access to new features.
  • Alternatives to Runway Gen-2 include the open-source Zeroscope and the relatively new Pika Labs, which is available for beta testing and, like Gen-2, supports text and image prompts.

Runway ML Gen-2, the most advanced text- and image-to-video model available today, has a new animation feature.

One of Gen-2's features is still-image animation, which automatically extends a single image into a short video scene with matching content. With Runway's new motion slider, it is now possible to set the amount of motion on a scale from 1 to 10, where 1 means almost no motion and 10 means strong motion.

Video: Twitter

Runway has also launched the Creative Partners Program. The program offers select groups of artists and creatives exclusive access to new Runway tools and models, unlimited plans, 1 million credits, early access to new features, and more.

Runway Gen-2 gains scale and quality

Runway's Gen-2 model was launched in March and received a major quality update in early July. The model creates short videos from text, image, or mixed prompts; the clips can now be up to 18 seconds long, compared with only four seconds at launch. This shows the rapid progress that video AI has made recently, although the generated scenes are still far from perfect.

Runway Gen-2 is available on the web and as an iPhone app. If you would like to test the system, you can sign up and receive a few free credits each month. See a list of outstanding Runway Gen-2 generations here.

Possible alternatives are the open-source Zeroscope, which runs locally on standard graphics cards and processes text prompts into short videos, and the relatively new Pika Labs, which can be tested in beta on Discord and, like Gen-2, supports image prompts in addition to text.
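
For anyone who wants to try the open-source route, here is a minimal sketch of running Zeroscope locally, following the Hugging Face diffusers text-to-video example. The model ID cerspense/zeroscope_v2_576w refers to the community weights on the Hugging Face Hub; the prompt and output filename are placeholders, and the exact API may vary slightly between diffusers versions.

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the community Zeroscope weights as a text-to-video pipeline.
pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # helps the model fit on consumer GPUs

# Generate a short clip from a text prompt (placeholder prompt).
video_frames = pipe(
    "a red panda drinking coffee, cinematic lighting",
    num_frames=24,  # 24 frames = a few seconds of video
).frames

# Write the frames out as an MP4 file (placeholder filename).
export_to_video(video_frames, "zeroscope_clip.mp4")
```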

Recently, one user showed how he single-handedly recreated the intro to Twin Peaks in Pixar style using Midjourney, Pika Labs' animation feature, and an editing program.

Source: Twitter