Short

OpenAI has rolled out a new "Developer Mode" for ChatGPT, giving Plus and Pro users on the web full access to MCP (Model Context Protocol) tools, including both read and write functions.

The beta feature lets developers connect their own remote servers, manage tools, and use them directly in chats. It supports OAuth authentication, HTTP streaming, and Server-Sent Events (SSE). To activate it, go to "Settings → Connectors → Advanced Settings → Developer Mode." Once enabled, you can add connectors directly through the chat input field.
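For context, this is roughly what the server side of such a connector can look like. The following is a minimal sketch, assuming the official Python MCP SDK and its FastMCP helper; the server name, tool names, and in-memory store are invented for illustration:

```python
# Minimal remote MCP server with one read and one write tool.
# Assumes the official Python MCP SDK ("mcp" package); names are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("notes-demo")

# In-memory store standing in for whatever backend the connector wraps.
NOTES: dict[str, str] = {}

@mcp.tool()
def read_note(name: str) -> str:
    """Read a note by name (a read-only tool)."""
    return NOTES.get(name, "")

@mcp.tool()
def write_note(name: str, content: str) -> str:
    """Create or overwrite a note (a write tool)."""
    NOTES[name] = content
    return f"Saved note '{name}' ({len(content)} characters)."

if __name__ == "__main__":
    # Serve over SSE so the server can be added as a remote connector;
    # the SDK also supports the newer streamable HTTP transport.
    mcp.run(transport="sse")
```

Hosted at a public HTTPS URL, a server like this would appear in Developer Mode with read_note as a read tool and write_note as a write tool.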

OpenAI warns that Developer Mode comes with serious risks, including prompt injection, unintended write operations, and potentially dangerous tool execution. If an MCP server is compromised, it could access or alter user data. Any write action requires separate confirmation to proceed.

"It's powerful but dangerous, and is intended for developers who understand how to safely configure and test connectors."

OpenAI

Short

AI startup Thinking Machines wants to make large language models more predictable. The team is studying why these models sometimes give different answers to the same question, even when temperature is set to 0, a setting that should make them deterministically return the most probable answer.

Despite a temperature setting of 0, DeepSeek 3.1 generates different answers to the same query. | Image: Thinking Machines

According to Thinking Machines, the common explanation, GPU floating-point precision, is "not entirely wrong" but "doesn't reveal the full picture." Server load also affects how a model responds: when the system is under heavy load, the same model can produce slightly different results. To fix this, the team developed a custom inference method that keeps outputs consistent regardless of system load. More predictable behavior like this could make AI-assisted research more reliable.
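To make the precision part of that explanation concrete: floating-point addition is not associative, so reducing the same numbers in a different order, which is what happens when batch sizes and kernel schedules shift with server load, can produce slightly different sums. A tiny NumPy illustration (not Thinking Machines' code):

```python
# Floating-point addition is not associative: different reduction orders
# over the same float32 values usually give slightly different sums.
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000).astype(np.float32)

full = x.sum(dtype=np.float32)                                         # one reduction order
halves = x[::2].sum(dtype=np.float32) + x[1::2].sum(dtype=np.float32)  # another order

print(full, halves, full == halves)  # the two sums typically differ in the last bits
```

Differences this small are harmless in isolation, but accumulated across billions of operations they can change which token comes out on top, even at temperature 0.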

Short

Adobe is bringing Google's new image AI, known as "Nano Banana" (officially Gemini 2.5 Flash Image), to Photoshop as an optional tool. The model is designed for editing existing images with a high level of consistency and reliability. Adobe's first demo video shows how Nano Banana works with the "Generative Fill" feature to expand or modify image content. The model is expected to roll out in September.

Video: Adobe

While Adobe's own Firefly image models support similar features, they don't reach the same level of quality. And if anyone from Adobe is reading: as a publisher, I'm no stranger to losing revenue to Big Tech. Feel free to reach out if you have questions.

Short

Microsoft has added a new audio mode to Copilot, powered by its MAI-Voice-1 model. Users can choose from three modes: Emotive Mode for expressive, free-form delivery; Story Mode for storytelling with multiple voices; and Scripted Mode for exact, word-for-word playback. The tool features a wide range of voices and styles, from Shakespearean performances to sports commentary, and is available in Copilot Labs.

Video: Microsoft

Microsoft recently introduced MAI-1 as its first major in-house language model and signed a deal with Anthropic to bring Anthropic's models into Office. Both moves signal that Microsoft is aiming for more independence from OpenAI.

Short

Google has added new reporting tools to NotebookLM. Users can now generate structured reports in more than 80 languages and adjust the tone, style, and structure as needed.

Video: Google

The update also includes a blog post format and dynamic suggestions for report types based on the uploaded material. For example, NotebookLM might recommend a white paper format for research documents. Users can also write their own prompts, up to 1,000 words, to control the tone, style, and format of the generated content.
