People, objects, landscapes – whatever the subject, a new Stable Diffusion algorithm can create variations of an image without copying the original.
Stability AI has added a new feature called Reimagine to its generative AI image model Stable Diffusion. For now, it is just a new tool in the Clipdrop web toolbox, which Stability AI acquired earlier this month. Soon, the feature will also be added to the open-source model. With Reimagine, users can quickly create multiple variations of a single image.
According to the company, no complex instructions are needed: you upload the image you want to reimagine via the Clipdrop web interface and then generate as many variations as you want. It is not yet possible to give the model additional context via text, so the reimagining happens on autopilot.
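The workflow is a plain image upload with no prompt attached. As a rough sketch of what such an upload looks like under the hood, the snippet below builds a `multipart/form-data` request with only the Python standard library. Note that the article describes Reimagine as a web tool only, so the endpoint URL and the `image_file` field name here are assumptions for illustration, not a documented API.

```python
import urllib.request
import uuid

# Assumed endpoint, for illustration only -- the article only mentions a web UI.
API_URL = "https://clipdrop-api.co/reimagine"

def build_multipart(image_bytes: bytes, field: str = "image_file",
                    filename: str = "input.png") -> tuple[bytes, str]:
    """Encode a single image as a multipart/form-data body (stdlib only)."""
    boundary = uuid.uuid4().hex
    head = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="{field}"; filename="{filename}"\r\n'
        f"Content-Type: image/png\r\n\r\n"
    ).encode()
    tail = f"\r\n--{boundary}--\r\n".encode()
    return head + image_bytes + tail, f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart(b"\x89PNG-fake-bytes")  # placeholder bytes
request = urllib.request.Request(API_URL, data=body, method="POST",
                                 headers={"Content-Type": content_type})
# urllib.request.urlopen(request) would send it; not executed here.
```

Since no text prompt is accepted, the image itself is the only input the request needs to carry.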
As an example, Stability AI shows a bedroom: the top left image is the original, the other three are variations invented by Stable Diffusion. Something similar could be done for fashion looks or hairstyles.
Filters for inappropriate content
Stability AI emphasizes that the new images are only inspired by the original, not copied. However, the approach has its limitations and works better for some scenes than for others.
A built-in filter is designed to block inappropriate requests, but it can sometimes block too little or too much. “The model may also produce abnormal results or exhibit biased behavior at times,” the developers write in a blog post.
Re-imagined images are supposed to be new
Stability AI explains the underlying technique only briefly; there is no scientific paper with further details. Reimagine replaces Stable Diffusion’s original text encoder with an image encoder. Unlike known image-to-image algorithms, no pixels of the original image are reused, the company claims.
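The claimed design, swapping the conditioning source while leaving the rest of the pipeline untouched, can be illustrated with a toy sketch. Everything below is a made-up stand-in, not Stability AI’s actual models: a fake text encoder and a fake image encoder both emit a conditioning vector of the same shape, so a (fake) denoising step cannot tell which encoder produced it. No pixels from the reference image reach the denoiser, only the embedding.

```python
import numpy as np

COND_DIM = 8  # toy conditioning dimensionality, shared by both encoders

def text_encoder(prompt: str) -> np.ndarray:
    """Hypothetical text encoder: prompt -> conditioning vector."""
    seed = sum(ord(c) for c in prompt)  # deterministic toy hash
    return np.random.default_rng(seed).standard_normal(COND_DIM)

def image_encoder(image: np.ndarray) -> np.ndarray:
    """Hypothetical image encoder: pixels -> conditioning vector of the same shape.
    A fixed random linear projection stands in for a real vision model."""
    flat = image.reshape(-1).astype(float)
    proj = np.random.default_rng(42).standard_normal((COND_DIM, flat.size))
    return proj @ flat

def denoise_step(latent: np.ndarray, cond: np.ndarray) -> np.ndarray:
    """One fake denoising step: it only sees the conditioning vector,
    never the reference pixels themselves."""
    return latent + 0.1 * np.tanh(cond)

rng = np.random.default_rng(0)
latent = rng.standard_normal(COND_DIM)

# Classic Stable Diffusion: condition the step on encoded text.
out_text = denoise_step(latent, text_encoder("a cozy bedroom"))

# Reimagine-style: condition the same step on an encoded reference image.
out_image = denoise_step(latent, image_encoder(rng.standard_normal((4, 4))))
```

In this sketch, changing from text-to-image to image-to-variation touches only the conditioning input; the denoiser itself is identical in both calls.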
Images generated with Reimagine can be downloaded at a maximum resolution of 768×768 pixels, and Clipdrop’s paid membership (starting at $9/month) also provides access to an upscaler. Reimagine will soon be available as open source on Stability AI’s GitHub.