Runway introduces Gen-4.5, its latest video generation model. The company says the new release outperforms competing systems on select benchmarks, but it still exhibits the core logic errors that plague current text-to-video models.
Runway has unveiled Runway Gen-4.5, describing it as more responsive to user instructions and more visually consistent than its predecessor.
The announcement leans heavily on results from the Artificial Analysis Text to Video benchmark. As of November 30, 2025, Gen-4.5 leads the ranking with an Elo score of 1247. That puts it slightly ahead of Google's Veo 3 at 1226 and Kling's version 2.5 at 1225. The OpenAI entry listed as "Sora 2 Pro" follows at 1206.
New model, old problems
Runway says Gen-4.5 models physical interactions more accurately than earlier versions. The system was built in close collaboration with Nvidia, and both training and inference run on Hopper and Blackwell GPUs.
Even with these upgrades, familiar weaknesses persist. Runway notes that Gen-4.5 still struggles with causality: doors can open before a handle is pushed, for example. Object permanence remains a problem too, with items disappearing after being briefly obscured. The model also shows a strong "success bias," making actions succeed far more often than they should, even when they would realistically fail, such as a poorly aimed shot that lands anyway.
According to Runway, these issues matter most for building reliable world models, an area the company plans to keep improving.
Runway says Gen-4.5 will roll out to all users in the coming days. At the same time, Kling has released a new model of its own: Kling Video O1, which the company describes as a powerful multimodal video system.