
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Stability AI launches Brand Studio for brand-consistent image generation

Stability AI, once a major force in open-source AI with its Stable Diffusion image model, is shifting its focus to commercial products. The company's latest release is Brand Studio, a platform built for creative teams that need AI-generated visuals matching their brand identity.

At the core of Brand Studio is "Brand Central," where teams can train their own brand-specific image models and set up campaign templates. A "Producer Mode" turns text descriptions into step-by-step visual production plans and runs them automatically. The platform also includes Curated Model Routing, which picks the best-suited AI model for a given task, whether that's Stable Diffusion or a third-party model. Other additions include "Precision Inpainting" for making targeted edits to specific parts of an image. Brand Studio comes in a free Core version and a paid Enterprise plan.
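The "Curated Model Routing" idea can be illustrated with a minimal sketch. The model names and keyword rules below are hypothetical stand-ins, not Stability AI's actual routing logic, which the company has not published:

```python
# Illustrative sketch of task-based model routing, loosely inspired by
# Brand Studio's "Curated Model Routing". All model IDs and rules here
# are invented for illustration.
ROUTES = {
    "inpainting": "stable-diffusion-inpaint",    # hypothetical model IDs
    "photorealistic": "third-party-photo-model",
    "default": "stable-diffusion-3.5",
}

def route_model(task_description: str) -> str:
    """Pick a model based on keywords in the task description."""
    text = task_description.lower()
    for keyword, model in ROUTES.items():
        if keyword != "default" and keyword in text:
            return model
    return ROUTES["default"]

print(route_model("Inpainting: replace the logo in this banner"))
```

A production router would likely weigh more than keywords (cost, latency, brand-model availability), but the dispatch pattern is the same.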

One in four quotes in AI chatbot responses comes from journalism, Muckrack study finds

One in four quotes generated by AI systems comes from journalistic sources. That's the finding of PR database Muckrack, which evaluated 15 million quotes from AI responses across Gemini, Perplexity, Claude, and ChatGPT, as Press Gazette reports.

Trade publications and specialist journalists show up particularly often. Former Business Insider chief Henry Blodget is the most cited journalist worldwide. Reuters leads among publications globally, followed by Forbes. In the UK, The Guardian ranks first, followed by specialist magazine Homes and Gardens.

Rank  Publication      Subject area
1     reuters.com      News
2     forbes.com       Business
3     theguardian.com  News
4     ft.com           Business
5     cnbc.com         Business

Muckrack sent millions of queries to all four AI services and tracked how often specific journalists and outlets appeared as linked sources. Based on the results, the company launched a new feature rating the "AI visibility" of journalists and publications across three tiers.
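Muckrack's approach, in essence, is counting which domains show up as linked sources in AI answers. A toy version of that tallying step might look like this (the sample responses are invented for illustration, and Muckrack's actual pipeline is not public):

```python
# Toy version of Muckrack-style source tracking: count how often each
# domain appears as a linked source across a batch of AI responses.
from collections import Counter
from urllib.parse import urlparse
import re

def cited_domains(responses):
    """Tally linked domains across a list of AI response texts."""
    counts = Counter()
    for text in responses:
        for url in re.findall(r"https?://\S+", text):
            domain = urlparse(url).netloc.removeprefix("www.")
            counts[domain] += 1
    return counts

# Invented sample responses, for illustration only.
responses = [
    "According to https://www.reuters.com/tech/story1 , chip demand rose.",
    "Reported by https://forbes.com/article2 and https://reuters.com/x",
]
print(cited_domains(responses).most_common(2))
```

Scaled to millions of queries across four chatbots, counts like these are what a visibility ranking would be built on.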

A separate analysis of Google's AI Overviews—AI-generated answers shown directly in search results—found that Facebook and Reddit are among the most cited sources across all queries.

Musk updates OpenAI lawsuit to redirect potential $150B in damages to the nonprofit foundation

Elon Musk has updated his lawsuit against OpenAI and Microsoft. He's now asking that any damages, potentially more than $150 billion, go not to him but to OpenAI's charitable foundation. He's also pushing for the removal of CEO Sam Altman from the foundation's board, according to the Wall Street Journal. Musk's lawyer, Marc Toberoff, said Musk "is not seeking a single dollar for himself."

Musk accuses OpenAI of abandoning its charitable mission and defrauding him as a donor by exploiting its nonprofit status. He wants Altman and OpenAI President Greg Brockman to turn over their shares and financial benefits to the foundation. The trial is set to begin in April in Oakland, California.

Musk argues OpenAI betrayed the mission he helped fund. Early interview notes, however, show he agreed to the addition of a for-profit unit in 2017 and actively discussed the transition while keeping the nonprofit in place.

OpenAI called the lawsuit on X "a harassment campaign driven by ego, jealousy and a desire to slow down a competitor." The company has also asked the attorneys general of Delaware and California to investigate Musk's behavior. OpenAI is currently valued at $852 billion and planning an IPO.

Microsoft's Bing team open-sources "Harrier" embedding model

Microsoft's Bing team (yes, really) has released "Harrier," an open-source embedding model. Harrier supports more than 100 languages, offers a 32,000-token context window, and was trained on over two billion examples plus synthetic data from GPT-5. According to the team, Harrier takes the top spot on the multilingual MTEB v2 benchmark and outperforms proprietary models from OpenAI and Amazon.

Rank (Borda)  Model                           Zero-shot  Active Params (B)  Total Params (B)  Embedding Dim  Max Tokens
1             harrier-oss-v1-27b              78%        25.6               27.0              5376           131072
2             KaLM-Embedding-Gemma3-12B-2511  73%        10.8               11.8              3840           32768
3             llama-embed-nemotron-8b         99%        7.0                7.5               4096           32768
4             Qwen3-Embedding-8B              99%        6.9                7.6               4096           32768
5             gemini-embedding-001            99%        n/a                n/a               3072           2048
6             Qwen3-Embedding-4B              99%        3.6                4.0               2560           32768
7             Octen-Embedding-8B              99%        6.9                7.6               4096           32768
8             F2LLM-v2-14B                    88%        13.2               14.0              5120           40960
9             F2LLM-v2-8B                     88%        6.9                7.6               4096           40960
10            harrier-oss-v1-0.6b             78%        0.440              0.596             1024           32768

Alongside the full 27-billion-parameter model, the team released two smaller variants—0.6B and 270M—designed to run on less powerful hardware. All three models are available on Hugging Face under the MIT license. Going forward, the team plans to integrate the technology into Bing and into new grounding services for AI agents.

Embedding models handle the searching, retrieving, and organizing of information that AI systems need for accurate answers. According to Microsoft, they're becoming increasingly critical as AI agents independently take on more complex, multi-step tasks.
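The retrieval step this describes reduces to comparing vectors. The sketch below uses toy 4-dimensional vectors to show the mechanism; a real model like harrier-oss-v1-27b would map text to 5376-dimensional vectors (per the table above), but the comparison works the same way:

```python
# Minimal sketch of embedding-based retrieval, the task Harrier is built
# for. The vectors below are toy values, not real model output.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 = same direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend each document has already been embedded by a model.
doc_vectors = {
    "bing_release_notes": np.array([0.9, 0.1, 0.0, 0.2]),
    "cooking_recipe":     np.array([0.0, 0.8, 0.6, 0.1]),
}

# Embedding of the user's question (toy value).
query = np.array([0.8, 0.2, 0.1, 0.1])

# Retrieve the document whose vector points in the closest direction.
best = max(doc_vectors, key=lambda name: cosine_similarity(query, doc_vectors[name]))
print(best)  # → bing_release_notes
```

In a real agent pipeline, the retrieved documents are then fed back to a language model as grounding context for its answer.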

Meta employees compete for token consumption on an internal AI leaderboard

At Meta, employees compete for titles like “Token Legend,” “Model Connoisseur,” and “Cache Wizard” on an internal leaderboard that ranks AI token consumption. But burning through more tokens doesn’t automatically mean getting more done.

OpenAI's safety brain drain finally gets an explanation and it's just Sam Altman's vibes

“My vibes don’t really fit.” In a new New Yorker profile based on over 100 interviews, Sam Altman explains why safety researchers keep leaving OpenAI and why shifting commitments others might call deception are just part of the job.

Sycophantic AI chatbots can break even ideal rational thinkers, researchers formally prove

A new study by researchers from MIT and the University of Washington shows that even perfectly rational users can be drawn into dangerous delusional spirals by flattering AI chatbots. Fact-checking bots and educated users don’t fully solve the problem.