Runway ML's Gen-2, the most advanced text- and image-to-video model available today, has a new animation feature.
One of Gen-2's features is still-image animation, which automatically extends an image with matching content to create a short video scene. With Runway's new Motion Slider, the amount of motion can now be set on a scale from 1 to 10: 1 means almost no motion, 10 means strong motion.
Runway has also launched the Creative Partners Program. The program offers select groups of artists and creatives exclusive access to new Runway tools and models, unlimited plans, 1 million credits, early access to new features, and more.
Runway Gen-2 gains length and quality
Runway's Gen-2 model launched in March and received a major quality update in early July. The model creates short videos from text, image, or mixed prompts, and the generated clips can now be up to 18 seconds long; at launch, only four seconds were possible. This shows the rapid progress video AI has made recently, although the generated scenes are still far from perfect.
Runway Gen-2 is available on the web and as an iPhone app. If you would like to test the system, you can sign up and receive a few free credits each month. See a list of outstanding Runway Gen-2 generations here.
Possible alternatives are the open-source Zeroscope, which runs locally on standard graphics cards and processes text prompts into short videos, and the relatively new Pika Labs, which can be tested in beta on Discord and, like Gen-2, supports image prompts in addition to text.
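For readers who want to try the local route, here is a minimal sketch of running Zeroscope with the Hugging Face diffusers library. It assumes the cerspense/zeroscope_v2_576w checkpoint, a CUDA-capable GPU, and the torch and accelerate packages; the prompt and frame count are illustrative, not defaults from Runway or Zeroscope.

```python
# Minimal sketch: text-to-video with Zeroscope via Hugging Face diffusers.
# Assumes diffusers, torch, and accelerate are installed and a CUDA GPU
# is available; the prompt, frame count, and output path are illustrative.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

pipe = DiffusionPipeline.from_pretrained(
    "cerspense/zeroscope_v2_576w", torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM usage

# Generate a short clip at the checkpoint's native 576x320 resolution.
# Note: newer diffusers releases return batched output; use .frames[0] there.
video_frames = pipe(
    "a sailboat drifting through fog at dawn",
    num_frames=24,
    height=320,
    width=576,
).frames

video_path = export_to_video(video_frames, output_video_path="sailboat.mp4")
print(f"Saved clip to {video_path}")
```

The 576w checkpoint is the lower-resolution variant; Zeroscope also ships an XL model meant for upscaling clips generated this way.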
Recently, one user showed how he single-handedly recreated the intro to Twin Peaks in Pixar style using Midjourney, Pika Labs' animation feature, and an editing program.