Short

Google’s text embedding model "gemini-embedding-001" is now generally available via the Gemini API and Vertex AI. It costs $0.15 per one million input tokens.

The model supports more than 100 languages, accepts inputs of up to 2,048 tokens, and uses Matryoshka Representation Learning (MRL), which lets users truncate the output embedding to smaller dimensions to cut memory use and compute costs. Google says the model outperforms both its earlier embedding models and external alternatives across several tasks. Since its experimental launch in March, the model has held a top position on the MTEB Multilingual Leaderboard, according to Google.
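Matryoshka-style embeddings are trained so that a prefix of the full vector is itself a usable embedding; a consumer can simply cut the vector short and re-normalize it before computing cosine similarity. A minimal sketch of that truncation step (the vector values and function name here are illustrative, not part of the Gemini API):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components of an MRL-style embedding and
    re-normalize to unit length so cosine similarity remains meaningful."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

# Hypothetical full-size embedding (real model outputs are far longer).
full = [0.6, 0.8, 0.0, 0.0]
small = truncate_embedding(full, 2)  # 2-dim vector, unit length
```

Smaller vectors mean proportionally less storage and faster similarity search, at the cost of some retrieval quality.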

Image: Google

SpaceX, Elon Musk's aerospace company, is investing $2 billion in xAI, Musk's AI lab. The funding is part of a larger $5 billion round, according to The Wall Street Journal. xAI's chatbot Grok already handles customer support for Starlink, SpaceX's satellite internet service.

Musk stated on X that "it would be great" if Tesla also invested in xAI, but this requires approval from Tesla's board and shareholders. In March, Musk announced the merger of xAI with his social media company X. The merger allows the companies to share data, AI models, computing power, and staff.


OpenAI alignment researcher Sean Grove believes the most valuable programmers of the future will be those who communicate best. "If you can communicate effectively, you can program," Grove says. In his view, software development has never been just about code but about structured communication: understanding requirements, defining goals, and making them clear to both people and machines.

Grove argues that code itself is only a "lossy projection" of the original intent and values. As AI models become more powerful, he says, the real skill will be turning that intent into precise specifications and prompts.

"Whoever writes the spec, be it a PM, a lawmaker, an engineer, or a marketer, is now the programmer," Grove explains.


Mistral AI and All Hands AI have introduced two new models designed for AI-powered programming agents: Devstral Small 1.1 and Devstral Medium. Devstral Small 1.1 2507 is open source and can run locally on an RTX 4090 or a Mac with 32 GB of RAM. It achieved a 53.6% score on the SWE-Bench Verified benchmark and supports XML along with other formats.

Image: Mistral

Devstral Medium scored 61.6% on the same benchmark. According to Mistral, it outperforms Gemini 2.5 Pro and GPT-4.1 while costing less. The model is available via API, supports fine-tuning, and will soon be integrated into Mistral Code.
