DreamGenerator is an AI camera with integrated Stable Diffusion prompts
Key Points
- DreamGenerator is a camera that uses generative AI to transform captured photos according to pre-selected themes, such as heaven, the Middle Ages, underwater, or outer space, while retaining the essential elements of the original shot.
- Developer Kyle Goodrich wants the camera to simplify the complex prompting process of AI systems like Stable Diffusion while still producing unique images.
- DreamGenerator combines the open-source image AI Stable Diffusion with ControlNet, a method that conditions generation on an input image and greatly improves Stable Diffusion's image-to-image capabilities. Both are available as open-source software.
AI image models can generate new images - or modify existing ones. DreamGenerator shows how this could work in a camera.
Unlike conventional cameras, DreamGenerator lets you choose the world in which the next photo will be taken: heaven or hell, the Middle Ages, underwater, or outer space, among many other variations. Thirty themes are pre-programmed.
Once a photo is captured, it is immediately transformed into a new image based on the pre-selected theme. The fundamental characteristics of the shot are preserved, such as a person's posture and facial features, or the perspective from which a car in a parking lot was photographed. What changes is the content: after the shot, the old Honda in the frame might reappear as a brand-new Ferrari.

"The generated images reference the composition and pose of the original photo, ensuring that key elements are retained while also adding in new imaginative touches," writes developer Kyle Goodrich.
Introducing DreamGenerator! ✨
A camera that transforms your photos into something new using the power of generative AI.
Choose from 30 prompts, capture, and watch as your image morphs into a one-of-a-kind masterpiece right before your eyes! pic.twitter.com/NJMxQ09Rna
- Kyle Goodrich (@_kylegoodrich) July 18, 2023
Of course, this has nothing to do with authentic photography. But Goodrich says he's primarily interested in simplifying the complex prompting process of systems like Stable Diffusion. You could do that with a smartphone app, obviously, but Goodrich says he prefers the simplicity of a point-and-shoot camera.
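Goodrich hasn't published how the 30 presets are wired up, but the idea of replacing free-form prompting with a fixed theme selection can be sketched as a simple lookup table: each theme maps to a prepared prompt that the camera applies at capture time. The theme names and prompt wording below are illustrative placeholders, not his actual presets.

```python
# Minimal sketch of theme-based prompting: pre-baked prompts stand in for
# hand-written ones. Theme names and prompt text are illustrative
# placeholders, not DreamGenerator's actual presets.
THEMES = {
    "medieval": "a medieval oil painting, castle courtyard, candlelight, highly detailed",
    "underwater": "an underwater scene, coral reef, volumetric light rays, photorealistic",
    "outer space": "a scene on a distant planet, starfield, sci-fi concept art",
}

NEGATIVE_PROMPT = "blurry, low quality, distorted anatomy"


def build_prompt(theme: str) -> tuple[str, str]:
    """Return (prompt, negative_prompt) for a pre-selected theme."""
    if theme not in THEMES:
        raise ValueError(f"unknown theme: {theme!r}")
    return THEMES[theme], NEGATIVE_PROMPT


# A shutter press would then do something like:
# prompt, negative = build_prompt("medieval")
```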
ControlNet guides Stable Diffusion with minimal input data
For image generation, Goodrich uses a combination of the open-source image AI Stable Diffusion and ControlNet, a method that conditions Stable Diffusion on an input image and greatly enhances its image-to-image capabilities.
Here, the guidance for a given subject requires only minimal data: a single photograph serves as the control input. Like Stable Diffusion, ControlNet is available as free open-source software and even runs on smartphones.
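Goodrich hasn't released his code, but a comparable pipeline can be put together with Hugging Face's diffusers library: derive a control image (edges or a pose map) from the captured photo and let a ControlNet-conditioned Stable Diffusion model render the themed version from it. The sketch below uses public Canny-edge checkpoints; the model choices and settings are assumptions, not the DreamGenerator implementation.

```python
# Sketch of re-rendering a captured photo with ControlNet-guided Stable
# Diffusion via the public diffusers library. Model choices and settings
# are assumptions, not DreamGenerator's actual implementation.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler

# 1. Derive a control image (Canny edge map) from the captured photo so the
#    composition of the original shot is preserved.
photo = np.array(Image.open("capture.jpg").convert("RGB"))
edges = cv2.Canny(cv2.cvtColor(photo, cv2.COLOR_RGB2GRAY), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# 2. Load Stable Diffusion together with a Canny-edge ControlNet.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# 3. Generate the themed image: the edge map keeps composition and perspective
#    intact while the prompt supplies the new world.
result = pipe(
    "a medieval oil painting, castle courtyard, candlelight, highly detailed",
    image=control_image,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
result.save("dream_capture.png")
```

For photos of people, a pose-conditioned ControlNet (such as the public OpenPose variant) would preserve body posture more directly, which would line up with the "composition and pose" behavior Goodrich describes.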
In the video below, Goodrich, an AR product designer at Snapchat, shows off a prototype of the hardware and software. He doesn't mention a retail version, so anyone who wants the AI camera will likely have to build it themselves or recreate it as a smartphone app.
This is currently a prototype that leverages Stable Diffusion and ControlNet.
The generated images reference the composition and pose of the original photo, ensuring that key elements are retained while also adding in new imaginative touches. pic.twitter.com/DKdT6POXTc
- Kyle Goodrich (@_kylegoodrich) July 18, 2023