OpenAI has released a beta version of the DALL-E 2 API, which should give rise to many new applications.
The DALL-E 2 API is available immediately, and integration requires only a few lines of code. Developers can use it to access three generation methods:
- Generating new images from text input
- Editing an existing image based on text input
- Generating variants of an existing image
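As a minimal sketch of how the three methods map onto REST endpoints: the endpoint paths below follow OpenAI's public Images API documentation, while the prompt, file names, and parameter values are placeholder assumptions, not values from the article.

```python
# Sketch: the three DALL-E 2 generation methods as REST requests.
# Endpoint paths match OpenAI's Images API docs; prompts and file
# paths are placeholders. No request is actually sent here.
import json

API_BASE = "https://api.openai.com/v1"

def generation_request(prompt, n=1, size="1024x1024"):
    """Text-to-image: POST /v1/images/generations."""
    return (f"{API_BASE}/images/generations",
            {"prompt": prompt, "n": n, "size": size})

def edit_request(prompt, image_path, mask_path=None):
    """Edit an existing image from text: POST /v1/images/edits.
    Sent as multipart/form-data because it uploads image files."""
    fields = {"prompt": prompt, "image": image_path}
    if mask_path:
        fields["mask"] = mask_path
    return (f"{API_BASE}/images/edits", fields)

def variation_request(image_path, n=2, size="512x512"):
    """Variants of an existing image: POST /v1/images/variations."""
    return (f"{API_BASE}/images/variations",
            {"image": image_path, "n": n, "size": size})

url, payload = generation_request("a watercolor fox in the snow")
print(url)             # prints https://api.openai.com/v1/images/generations
print(json.dumps(payload))
```

A real client would add an `Authorization: Bearer <API key>` header and POST these payloads with an HTTP library; the official `openai` client package wraps the same three endpoints.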
Images can be generated at resolutions of 256×256, 512×512, or 1024×1024 pixels; smaller images generate faster. As with the web version, OpenAI has enabled filters on the API that block questionable or legally sensitive content.
The API is currently still in beta and allows a maximum of ten generated images per minute and 25 images per five minutes. This is to ensure that all users can “comfortably develop prototypes,” the company writes.
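A client can respect these beta caps on its own side before sending requests. A minimal sketch, assuming a sliding-window counter (the two limit values come from the article; everything else, including the class and method names, is illustrative):

```python
import time
from collections import deque

class BetaRateLimiter:
    """Client-side sliding-window limiter for the beta caps:
    at most 10 images per 60 s and 25 images per 300 s."""

    def __init__(self, clock=time.monotonic):
        self.clock = clock          # injectable for testing
        self.stamps = deque()       # timestamps of recent generations

    def allow(self, n=1):
        """Return True (and record n generations) if n more images
        fit within both windows; otherwise return False."""
        now = self.clock()
        # Drop timestamps older than the longest window (300 s).
        while self.stamps and now - self.stamps[0] >= 300:
            self.stamps.popleft()
        in_last_minute = sum(1 for t in self.stamps if now - t < 60)
        if in_last_minute + n > 10 or len(self.stamps) + n > 25:
            return False
        self.stamps.extend([now] * n)
        return True
```

A production client would sleep and retry rather than simply refuse, and the constants can be raised as OpenAI loosens the limits.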
Those who need a higher limit can contact OpenAI directly via chat. OpenAI intends to increase capacity, and generation rates will be raised continuously based on user feedback in the coming months. OpenAI has published documentation for the DALL-E 2 API.
Rapid growth: DALL-E 2 has three million users
With the announcement of the API, OpenAI also provides an update on DALL-E's growth: more than three million people currently use DALL-E 2, generating more than four million images per day. At the end of September, the figures stood at 1.5 million users and more than two million images per day, meaning the user base doubled in just one month.
If the DALL-E 2 API has an effect similar to the GPT-3 API's, a new app ecosystem around the image AI should emerge relatively quickly and drive further growth.
OpenAI already cites initial app examples: the fashion platform Cala uses DALL-E 2 for clothing design suggestions, and the start-up Mixtiles sells framed pictures that users generate themselves based on questions about childhood memories and more. Microsoft has integrated DALL-E into its Designer app.
OpenAI also holds out the prospect of qualitative improvements: “As our research evolves, we will continue to bring the state of the art into the API, including advances in image quality, latency, scalability, and usability.”