
Runway has introduced Gen-3 Alpha, a new AI model for video generation. According to Runway, it represents a "significant improvement" over its predecessor, Gen-2, in terms of detail, consistency, and motion representation.

Gen-3 Alpha was trained on a mix of videos and images. Like its predecessor, launched in November 2023, it supports text-to-video, image-to-video, and text-to-image, as well as control modes such as Motion Brush, Advanced Camera Controls, and Director Mode. Additional tools that offer even finer control over structure, style, and motion are planned.

Runway Gen-3 Alpha: First model in a series with new infrastructure

According to Runway, Gen-3 Alpha is the first in a series based on a new training infrastructure for large multimodal models. However, the startup does not reveal what specific changes the researchers have made.

There is no technical paper; the only information available so far is a blog post with numerous unedited video examples of up to ten seconds each, including the prompts used to generate them.

The company highlights the model's ability to generate human characters with different actions, gestures, and emotions. Gen-3 Alpha also demonstrates advances in the temporal control of elements and transitions in scenes.

"Training Gen-3 Alpha was a collaborative effort from a cross-disciplinary team of research scientists, engineers, and artists," emphasizes RunwayML. It was designed to interpret a wide range of styles and film concepts.

Customized models for industry customers

In addition to the standard version, Runway says it is working with entertainment and media companies on customized versions of Gen-3. These are intended to offer greater stylistic control and more consistent characters, and to meet specific requirements. Interested companies can submit an inquiry via a contact form.

In addition to Gen-3 Alpha, Runway is announcing new safety features, such as an improved moderation system and support for the C2PA provenance standard, which major commercial image models also use. The company also sees the model as a step toward general world models and a new generation of AI-powered video generation.

Runway's race to catch up with Sora

In February 2024, ChatGPT developer OpenAI unveiled its Sora video model, which set a new benchmark for consistency and image quality. However, Sora is still not publicly available and is likely far from a commercial release. Since then, several competitors have introduced similar systems, most notably KLING and Vidu from China.

RunwayML, which has been a pioneer in this field for several years, seems to have caught up with Gen-3 Alpha. According to the company, Gen-3 Alpha will be available in the next few days.

Summary
  • Runway has introduced Gen-3 Alpha, a new AI model that offers significant improvements in detail, consistency, and motion representation in the generated videos compared to its predecessor, Gen-2.
  • Gen-3 Alpha is based on a new training infrastructure for large multimodal models and has been trained on a mixture of video and images. It supports text-to-video, image-to-video, and text-to-image functions, as well as various control modes.
  • In addition to the standard version, Runway is developing custom versions of Gen-3 for entertainment and media companies. The company sees the model as a step toward general world models and a new generation of AI-powered video creation. A release is planned for the next few days.