Adobe turns its creative suite into a chatbot with the new Firefly AI Assistant

Adobe is introducing the Firefly AI Assistant, which handles complex creative workflows across Photoshop, Illustrator, Premiere, and Lightroom through a single chat interface. Users describe what they want in plain language, and the assistant runs through the necessary steps automatically, though they can jump in and make changes at any point.

"Creative Skills" lets users kick off multi-step processes with a single command, like adapting an image for multiple social media platforms at once. Adobe also plans to connect it to chat platforms like Anthropic's Claude. A public beta is expected to ship in the coming weeks. The assistant builds on "Project Moonlight," a prototype Adobe demoed at Adobe MAX.

Adobe is also expanding Firefly with AI-powered video and image editing tools, including audio cleanup, advanced color controls, and image adjustments. The platform now supports more than 30 AI models, including Kling 3.0.

OpenAI updates Agents SDK with new sandbox support for safer AI agents

OpenAI has shipped a major update to its Agents SDK. The kit gives developers building blocks for AI agents that can check files, run commands, edit code, and handle longer tasks in protected environments. It bundles tool usage via the Model Context Protocol (MCP), code execution through a shell tool, file editing with an apply-patch tool, and custom instructions through AGENTS.md files. A manifest function describes the workspace and supports local files as well as cloud storage like AWS S3, Google Cloud Storage, and Azure Blob Storage.

The Agents SDK connects user input, AI models, and tools into a single framework for building AI agents. | Image: OpenAI

The biggest addition is native sandbox support. Agents now run in isolated environments with their own files, tools, and dependencies. The SDK works with providers like Cloudflare, Vercel, E2B, and Modal, and developers can plug in their own sandboxes too. OpenAI says separating control logic from the computing environment should make agents more secure, stable, and easier to scale. If something breaks, the agent can pick up where it left off in a fresh container. The new features are available in Python today, with TypeScript on the way. Standard OpenAI API pricing applies.
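The separation OpenAI describes can be illustrated with a toy sketch. This is not the Agents SDK's actual API; the `Agent` and `Sandbox` classes below are hypothetical stand-ins showing the pattern: control logic keeps its own checkpoint (a step cursor), while tool execution lives in a replaceable, isolated environment, so a crashed run can resume in a fresh container.

```python
# Toy illustration of "control logic separate from execution environment".
# NOT the OpenAI Agents SDK API -- class names and tools here are invented.
from dataclasses import dataclass, field


@dataclass
class Sandbox:
    """Isolated environment with its own file state; swappable mid-run."""
    files: dict = field(default_factory=dict)

    def run_tool(self, name: str, arg: str) -> str:
        if name == "write":
            path, _, content = arg.partition(":")
            self.files[path] = content
            return f"wrote {path}"
        if name == "read":
            return self.files.get(arg, "<missing>")
        raise ValueError(f"unknown tool: {name}")


@dataclass
class Agent:
    """Control logic: a plan plus a cursor checkpoint. Because only the
    cursor lives here, a broken Sandbox can be replaced with a fresh one
    and run() picks up at the next pending step."""
    steps: list
    cursor: int = 0

    def run(self, sandbox: Sandbox) -> list:
        results = []
        while self.cursor < len(self.steps):
            name, arg = self.steps[self.cursor]
            results.append(sandbox.run_tool(name, arg))
            self.cursor += 1
        return results


agent = Agent(steps=[("write", "notes.txt:hello"), ("read", "notes.txt")])
out = agent.run(Sandbox())
print(out)  # ['wrote notes.txt', 'hello']
```

The design choice the update points at is exactly this split: since the agent's progress is tracked outside the sandbox, the computing environment becomes disposable, which is what makes the runs more stable and easier to scale.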

Google ships its most expressive Gemini 3.1 text-to-speech model yet with 70+ language support

Google is rolling out its new text-to-speech model based on Gemini 3.1 Flash. The company says it's the most natural and expressive voice output it has shipped to date. The big new feature is audio tags—simple text commands that let developers control the style, tempo, tone, and accent of the generated speech. The model supports over 70 languages and can handle multi-speaker dialogs.

On the Artificial Analysis leaderboard, the model hits an Elo rating of 1,211 and stands out for its quality-to-price ratio. It beats ElevenLabs v3 in overall quality and sits just behind Inworld 1.5 Max.

Gemini 3.1 Flash TTS ranks among the top text-to-speech models for both quality and value. | Image: Google

Gemini 3.1 Flash TTS has a free tier, but Google uses the data to improve its products. The paid tier runs $1.00 per million tokens for text input and $20.00 per million tokens for audio output. Batch mode cuts those prices in half to $0.50 and $10.00, respectively. On the paid tier, Google doesn't use the data for product improvement.
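To make the pricing concrete, here is a small sketch that computes the cost of a request from the published rates ($1.00 per million text-input tokens, $20.00 per million audio-output tokens, with batch mode halving both). The `tts_cost` helper is illustrative, not part of any Google API.

```python
# Hypothetical helper based on the article's published Gemini 3.1 Flash TTS
# pricing: $1.00/M text input tokens, $20.00/M audio output tokens,
# batch mode at half price.
def tts_cost(text_tokens: int, audio_tokens: int, batch: bool = False) -> float:
    input_rate, output_rate = 1.00, 20.00  # USD per million tokens
    if batch:
        input_rate /= 2
        output_rate /= 2
    return (text_tokens * input_rate + audio_tokens * output_rate) / 1_000_000


# One million tokens in and out: $1 + $20 on the standard tier.
print(tts_cost(1_000_000, 1_000_000))              # 21.0
print(tts_cost(1_000_000, 1_000_000, batch=True))  # 10.5
```

As the numbers show, audio output dominates the bill, so batch mode matters most for workloads that generate a lot of speech.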

Gemini 3.1 Flash TTS is available as a preview through the Gemini API, Vertex AI for enterprise users, and Google Vids for Workspace users. Anyone can try it for free in Google's AI Studio. All generated audio is tagged with Google's SynthID watermark to flag AI-generated content.

Microsoft Copilot in Word can now track changes and manage comments

Microsoft is adding new Copilot features to Word that target professionals in legal, finance, and compliance.

Copilot can now track changes at the word level, so edits stay transparent and easy to review. Users can also manage comments directly in the text, insert tables of contents, and set up headers and footers with dynamic fields like page numbers.

For multi-step edits, Copilot now shows what it's working on in real time. Microsoft says the features run on "Work IQ," a layer that adapts responses based on the user and their organization. Data stays within Microsoft 365's existing security boundaries. For now, the new features are only available on Windows desktop through the Office Insiders Beta Channel's Frontier program. Web and Mac support will follow.

Just a few days ago, Anthropic released a similar Word add-in based on its Claude chatbot.

OpenAI's European Stargate plans shrink as Microsoft and Google take over capacity

Back in July 2025, OpenAI CEO Sam Altman expressed confidence that the conditions were right to bring Stargate to Narvik, Norway. Just a few months later, that optimism has largely evaporated. OpenAI hasn't closed the deal for the Norwegian data center near the Arctic Circle, nor is it sticking with its UK Stargate project. Both sites were developed by neocloud provider Nscale.

Microsoft is stepping in, leasing 30,000 Nvidia Vera Rubin chips at the Narvik facility on top of an existing $6.2 billion deal. The London Nscale data center is going to Google, according to Bloomberg. OpenAI's once-sweeping infrastructure promise of $1.4 trillion has shrunk to a more concrete forecast of $600 billion by 2030.

OpenAI's GPT-5.4 Pro reportedly solves a longstanding open Erdős math problem in under two hours

OpenAI's GPT-5.4 Pro model has apparently solved Erdős open math problem #1196. The model reportedly found the solution in about 80 minutes and prepared it as a LaTeX paper in another 30. Formal verification is underway.

Mathematician Terence Tao commented in the Erdős Problems forum that the work reveals a previously undescribed connection between the anatomy of integers and Markov process theory. "That would be a meaningful contribution to the anatomy of integers that goes well beyond the solution of this particular Erdos problem," Tao writes. Kevin Barreto, who says he'll soon join OpenAI's AI for Science team, noted in the same forum that the Markov chain technique the model used was a creative step human mathematicians had overlooked despite years of work on the problem.

The discussion matters because of an ongoing debate over whether LLMs can discover genuinely new knowledge in mathematics and other disciplines, beyond recombining what they absorbed during training. This case suggests that new, previously undescribed knowledge can be latent within data points that were already known.

Greg Brockman predicts AI will let small teams match the output of large ones if they can afford the compute

In the future, working with AI won’t mean adapting to the computer—the computer will adapt to you, says OpenAI President Greg Brockman. “This is disruptive. Institutions will change.”