
Inception is alive and well. The AI start-up, founded by Stanford professor Stefano Ermon (not to be confused with Inflection, whose workforce DeepMind co-founder Mustafa Suleyman took to Microsoft in 2024), has raised $50 million in fresh capital. The round was led by Menlo Ventures, with support from Microsoft’s M12, Nvidia, Databricks, and Snowflake.

Inception is betting on diffusion language models, or dLLMs, which don’t generate text word by word like autoregressive LLMs but instead refine an entire sequence step by step. Until now, this approach has mostly powered image generators; Inception wants to bring it to text and code. Google demoed its own take, Gemini Diffusion, in May 2025.
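The contrast with autoregressive decoding can be sketched as a toy mask-based refinement loop. This is only a minimal illustration of the idea, not Inception's actual method: the `fake_model` stand-in, the vocabulary, and all names here are invented for the sketch.

```python
import random

random.seed(0)

VOCAB = ["the", "cat", "sat", "on", "a", "mat"]
MASK = "<mask>"

def fake_model(tokens):
    """Stand-in for a dLLM's denoiser: propose a (token, confidence) guess
    for every masked position. A real dLLM would run a neural network over
    the whole sequence and score all positions in parallel."""
    return {i: (random.choice(VOCAB), random.random())
            for i, t in enumerate(tokens) if t == MASK}

def diffusion_decode(length=8, steps=4):
    # Start from a fully masked sequence and refine it over a few steps,
    # committing the highest-confidence guesses first. An autoregressive
    # model would instead emit one token at a time, left to right.
    tokens = [MASK] * length
    per_step = max(1, length // steps)
    while MASK in tokens:
        guesses = fake_model(tokens)
        # Keep only the most confident predictions this step.
        best = sorted(guesses.items(), key=lambda kv: -kv[1][1])[:per_step]
        for pos, (tok, _) in best:
            tokens[pos] = tok
    return tokens

print(diffusion_decode())
```

Because each refinement step can fill many positions at once, the number of model passes scales with the step count rather than the sequence length, which is where the speed claims come from.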

Inception claims its new model, Mercury, generates over 1,000 tokens per second, while classic autoregressive models like GPT-5 typically top out at 40 to 60 tokens per second.

Mercury is available through partners like OpenRouter and Poe, with pricing set at $0.25 per million input tokens and $1 per million output tokens, giving it speed and cost advantages over standard LLMs.
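At those list prices, per-request costs are easy to estimate. A quick sketch using the article's numbers; the token counts in the example are made up for illustration:

```python
# Mercury list prices from the article: $0.25 per million input tokens,
# $1.00 per million output tokens.
INPUT_PER_M = 0.25
OUTPUT_PER_M = 1.00

def request_cost(input_tokens, output_tokens):
    """Estimated cost in dollars for a single request."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: a 10,000-token prompt producing a 2,000-token answer.
cost = request_cost(10_000, 2_000)
print(f"${cost:.4f}")  # → $0.0045
```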

Entrepreneur Niels Hoven has released an alphabet book featuring nearly 1,000 AI-generated illustrations. Hoven says commissioning that many images from human artists would have been prohibitively complicated and expensive: at roughly two hours per illustration, he estimates the artwork alone would have cost around $50,000. His use of AI instead drew criticism on social media and in Amazon reviews.

A screenshot highlights the controversy over AI image generators. Recent UK court decisions say training AI models on existing works does not violate copyright. | via X

Hoven addressed the criticism, explaining that without AI, a hardcover edition would have cost about $200. Thanks to generative AI, the book is available as a free PDF and as a $30 hardcover, with the sale price covering only Amazon's printing and shipping costs; Hoven says he earns nothing from sales. He maintains the book couldn't have been made without AI, and that its main purpose is to help children learn to read. Still, the project doubles as advertising for Hoven's company, which offers a related learning app.

A developer at OpenAI known as "Roon" on X explains why large language models never behave exactly the same way twice. Roon says a model's "personality" can shift with every training run, even if the dataset doesn't change. That's because the training process depends on random elements like reinforcement learning, so each run makes different choices in what's called "model space." As a result, every training pass produces slightly different behavior. Roon adds that even within a single training run, it's nearly impossible to recreate the same personality.

Video: via X
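Roon's point about randomness can be illustrated with a toy example: two gradient-descent runs on identical data, differing only in the seed that drives sample selection, land in the same neighborhood but not on identical weights. Everything below is invented for illustration; real training runs diverge through RL exploration, data ordering, GPU nondeterminism, and more.

```python
import random

def train_toy_model(seed, steps=200):
    """Fit a one-weight 'model' with SGD. The data and objective are
    identical across runs; only the random sampling order differs."""
    rng = random.Random(seed)
    data = [(x, 2.0 * x) for x in range(10)]  # ideal weight: 2.0
    w = 0.0
    for _ in range(steps):
        x, y = rng.choice(data)         # the stochastic element
        grad = 2 * (w * x - y) * x      # gradient of (w*x - y)^2
        w -= 0.001 * grad
    return w

w_a = train_toy_model(seed=1)
w_b = train_toy_model(seed=2)
print(w_a, w_b)  # both land near 2.0, but not on identical values
```

Scaled up to billions of parameters, these small path-dependent differences are what show up to users as a shift in "personality."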

OpenAI tries to keep these "personality drifts" in check, since users often get attached to a model's quirks. That was especially true of the earlier, notoriously sycophantic version of GPT-4o, which some users still miss. Roon, however, wasn't a fan: he publicly wished for that "insufficiently aligned" model's "death" before deleting the post.
