OpenAI, Oracle, and SoftBank are adding five new data centers to their Stargate AI infrastructure project: two in Texas (including one in Milam County), one in New Mexico, one in Ohio, and one at a yet-undisclosed site in the Midwest. The additions bring the platform's planned capacity to nearly 7 gigawatts, with total investment exceeding $400 billion; the companies aim to secure the full 10 gigawatts of AI infrastructure by the end of 2025. Oracle is supplying Nvidia systems for the build-out, and SoftBank is involved through its energy subsidiary SB Energy. The new locations were chosen from more than 300 proposals across 30 states and are expected to generate over 25,000 direct jobs, with more US sites in the pipeline. OpenAI has also announced a partnership with Nvidia for up to $100 billion in additional computing power.
Suno has launched its latest music model, v5, for Pro and Premier subscribers. The company says v5 delivers more realistic vocals, improved sound quality, and greater creative control, and claims the new model outperforms all earlier versions as well as competing music models. In an Elo benchmark shared by Suno, v5 scored 1,293, beating v4.5+ (1,208) and v4 (992).
OpenAI has rolled out a new "Developer Mode" for ChatGPT, giving Plus and Pro users on the web full access to MCP (Model Context Protocol) tools, including both read and write functions.
The beta feature lets developers connect their own remote servers, manage tools, and use them directly in chats. It supports OAuth authentication, HTTP streaming, and Server-Sent Events (SSE). To activate it, go to "Settings → Connectors → Advanced Settings → Developer Mode." Once enabled, you can add connectors directly through the chat input field.
OpenAI warns that Developer Mode comes with serious risks, including prompt injection, unintended write operations, and potentially dangerous tool execution. If an MCP server is compromised, it could access or alter user data. Any write action requires separate confirmation to proceed.
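That confirmation requirement amounts to a client-side gate in front of any non-read tool call. A minimal sketch of the idea (the tool names and `confirm` callback are hypothetical, not ChatGPT's actual mechanism):

```python
READ_ONLY = {"search", "fetch", "list"}  # hypothetical read-only MCP tools

def execute_tool(name: str, args: dict, confirm) -> str:
    """Run a tool call, but require explicit user confirmation for writes."""
    if name not in READ_ONLY:
        # Write operations (create/update/delete) need a separate opt-in,
        # so a prompt-injected write can't fire silently.
        if not confirm(f"Allow write tool '{name}' with {args}?"):
            return "cancelled"
    return f"ran {name}"

# A confirm callback that declines blocks the write but not the read.
print(execute_tool("delete_row", {"id": 7}, confirm=lambda msg: False))  # cancelled
print(execute_tool("search", {"q": "docs"}, confirm=lambda msg: False))  # ran search
```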
"It's powerful but dangerous, and is intended for developers who understand how to safely configure and test connectors."
AI startup Thinking Machines wants to make large language models more predictable. The team is studying why these models sometimes give different answers to the same question, even with temperature set to 0, a setting under which the model should always pick the most probable token and therefore return identical output every time.
Despite a temperature setting of 0, DeepSeek 3.1 generates different answers to the same query. | Image: Thinking Machines
According to Thinking Machines, the standard explanation, floating-point rounding on GPUs, is "not entirely wrong" but "doesn't reveal the full picture." Server load also affects how a model responds: under heavy load, requests are batched differently, and the same model can produce slightly different results. To fix this, the team developed a custom inference method that keeps outputs consistent regardless of system load. More predictable behavior like this could make AI-supported research more reliable.
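The floating-point part of the explanation is easy to demonstrate: addition of floats is not associative, so a GPU reduction that groups values differently (for example, because the batch size changed with server load) can yield slightly different sums. A toy illustration (not Thinking Machines' code):

```python
# Floating-point addition is not associative: the grouping of a sum
# changes the result, because small terms can be absorbed by large ones.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]  # 1e16 absorbs the first 1.0
regrouped     = (vals[0] + vals[2]) + (vals[1] + vals[3])  # large terms cancel first

print(left_to_right)  # 1.0
print(regrouped)      # 2.0
```

Scaled up to the billions of additions in a transformer forward pass, such last-bit differences can flip which token is most probable, which is why greedy (temperature-0) decoding alone does not guarantee identical answers.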