AI video startups Runway and Luma AI have released APIs for their video generation models within hours of each other. This allows companies and developers to integrate AI video technology into their own applications.
New York-based Runway has launched an API for its Gen-3 Alpha Turbo AI video model. Shortly after, San Francisco-based Luma AI followed with an API for its Dream Machine model.
Runway says advertising group Omnicom is already using its API, though it did not provide details of the specific application.
Runway is rolling out its API gradually, initially only to select partners. Interested parties can join a waitlist. Pricing starts at one cent per credit, with five credits needed for a one-second video.
Luma AI's API is available immediately. The company charges $0.0032 per million pixels generated, which works out to about $0.35 for a five-second 720p video.
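For comparison, the published rates can be turned into rough per-clip costs. The sketch below assumes 720p means 1280×720 and a 24 fps frame rate for Luma's pixel-based billing; neither assumption is confirmed by the companies.

```python
# Back-of-the-envelope cost comparison based on the published rates.
# Assumptions (not confirmed by either company): 720p means 1280x720,
# and Luma bills every frame of a 24 fps clip.

def runway_cost(seconds: float, price_per_credit: float = 0.01,
                credits_per_second: int = 5) -> float:
    """Runway: one cent per credit, five credits per second of video."""
    return seconds * credits_per_second * price_per_credit

def luma_cost(seconds: float, width: int = 1280, height: int = 720,
              fps: int = 24, price_per_million_px: float = 0.0032) -> float:
    """Luma: $0.0032 per million pixels generated."""
    total_pixels = width * height * fps * seconds
    return total_pixels / 1_000_000 * price_per_million_px

print(f"Runway, 5 s clip:  ${runway_cost(5):.2f}")   # ~$0.25
print(f"Luma, 5 s @ 720p:  ${luma_cost(5):.2f}")     # ~$0.35
```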
Luma AI's Dream Machine API offers features like text-to-video generation, image-to-video conversion, and camera motion control.
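To illustrate how such an API is typically consumed, the sketch below submits a text-to-video request over HTTP. The endpoint path, request fields, and response shape are assumptions for illustration only, not taken from Luma's official documentation.

```python
# Illustrative only: the base URL, request fields, and response format
# below are assumptions, not Luma's documented interface.
import os
import requests

API_KEY = os.environ["LUMA_API_KEY"]                    # hypothetical env variable
BASE_URL = "https://api.lumalabs.ai/dream-machine/v1"   # assumed base URL

def create_text_to_video(prompt: str) -> dict:
    """Submit a text-to-video generation job and return the raw response."""
    response = requests.post(
        f"{BASE_URL}/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"prompt": prompt},
        timeout=30,
    )
    response.raise_for_status()
    # The response would typically contain a job ID to poll for the finished video.
    return response.json()

if __name__ == "__main__":
    job = create_text_to_video("A slow dolly shot through a rain-soaked neon alley")
    print(job)
```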
Both companies stress the importance of responsible use of their technology. Luma AI says it uses a multi-level moderation system, combining AI filters with human oversight.
The release of these APIs intensifies competition in AI video generation. Adobe recently introduced its "enterprise-grade" Firefly Video AI model, but it is not yet available as an API. OpenAI also has a powerful video model called Sora, but has yet to release it publicly.