Midjourney has released the first major update for V6 Alpha and for its new website with generation features.
The update brings faster upscaling; better image quality, consistency, and prompt following; and improved text rendering and performance at high stylize values. Vary (Region) and inpainting will follow in January.
The company plans to keep improving the V6 alpha in the coming weeks before it enters the official beta phase. By the end of the month, V6 is expected to become the new standard model.
In addition, the alpha version of the new Web UI has been extensively updated. It now includes filters and folders for sorting your generations.
Another useful feature is the ability to reference existing images in prompts by dragging and dropping them onto the prompt field, including several images at once. You no longer need to copy and paste individual URLs.
No word yet on when the alpha site with image generation will go live for all users.
Midjourney plans to expand
Midjourney CEO David Holz promises more products for 2024. Video model training is set to begin this month, and Midjourney is also working on 3D generation. Holz's stated goal is to generate video game worlds in real time using AI models.
According to Holz, "Midjourney isn’t a really fast artist, it’s more like a really slow game engine." His vision for the future is an AI model capable of generating volumetric 3D worlds at 60 frames per second.
Since its release, Midjourney V6 has been criticized because the model can generate images that are very similar or nearly identical to images from its training set. Midjourney says it will hold users responsible if they provoke such output intentionally.
OpenAI's DALL-E 3 can also be made to infringe copyright by describing a motif instead of naming it, e.g. "animated sponge" instead of "SpongeBob". Depending on the DALL-E 3 implementation, this can be enough to bypass OpenAI's safety guidelines.