Adobe turns its creative suite into a chatbot with the new Firefly AI Assistant

Adobe is introducing the Firefly AI Assistant, which handles complex creative workflows across Photoshop, Illustrator, Premiere, and Lightroom through a single chat interface. Users describe what they want in plain language, and the assistant runs through the necessary steps automatically, though they can jump in and make changes at any point.

"Creative Skills" lets users kick off multi-step processes with a single command, like adapting an image for multiple social media platforms at once. Adobe also plans to connect it to chat platforms like Anthropic's Claude. A public beta is expected to ship in the coming weeks. The assistant builds on "Project Moonlight," a prototype Adobe demoed at Adobe MAX.

Adobe is also expanding Firefly with AI-powered video and image editing tools, including audio cleanup, advanced color controls, and image adjustments. The platform now supports more than 30 AI models, including Kling 3.0.

OpenAI updates Agents SDK with new sandbox support for safer AI agents

OpenAI has shipped a major update to its Agents SDK. The kit gives developers building blocks for AI agents that can check files, run commands, edit code, and handle longer tasks in protected environments. It bundles tool usage via the Model Context Protocol (MCP), code execution through a shell tool, file editing with an apply-patch tool, and custom instructions through AGENTS.md files. A manifest function describes the workspace and supports local files as well as cloud storage like AWS S3, Google Cloud Storage, and Azure Blob Storage.
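To make the manifest idea concrete, here is a toy sketch in Python. The field names and structure are invented for illustration and are not the Agents SDK's actual manifest schema; the point is only to show how a single description could cover both local files and cloud-backed sources:

```python
# Illustrative sketch only: a toy "workspace manifest" mapping an agent's
# file sources to their locations. All field names here are hypothetical.
manifest = {
    "workspace": "demo-project",
    "sources": [
        {"type": "local", "path": "./src"},
        {"type": "s3", "bucket": "my-agent-data", "prefix": "docs/"},
        {"type": "gcs", "bucket": "shared-assets", "prefix": "images/"},
    ],
}

def describe(manifest: dict) -> list[str]:
    """Render each source as a short human-readable line."""
    lines = []
    for src in manifest["sources"]:
        if src["type"] == "local":
            lines.append(f"local:{src['path']}")
        else:
            lines.append(f"{src['type']}://{src['bucket']}/{src['prefix']}")
    return lines

print(describe(manifest))
```

The benefit of such a description is that the agent's control logic never needs to know whether a file lives on disk or in AWS S3, Google Cloud Storage, or Azure Blob Storage.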

The Agents SDK connects user input, AI models, and tools into a single framework for building AI agents. | Image: OpenAI

The biggest addition is native sandbox support. Agents now run in isolated environments with their own files, tools, and dependencies. The SDK works with providers like Cloudflare, Vercel, E2B, and Modal, and developers can plug in their own sandboxes too. OpenAI says separating control logic from the computing environment should make agents more secure, stable, and easier to scale. If something breaks, the agent can pick up where it left off in a fresh container. The new features are available in Python today, with TypeScript on the way. Standard OpenAI API pricing applies.
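The "fresh environment per run" idea can be sketched with nothing but the standard library. This is not the Agents SDK's sandbox API; it only illustrates the concept of executing a task in a throwaway workspace that can be discarded and recreated if something breaks:

```python
import subprocess
import sys
import tempfile
from pathlib import Path

def run_in_fresh_workspace(code: str) -> str:
    """Execute a snippet inside a throwaway directory, loosely mimicking
    the idea of giving each agent run its own isolated environment.
    A real sandbox provider (Cloudflare, Vercel, E2B, Modal, ...) adds
    process, resource, and network isolation that a temp dir alone
    does not provide."""
    with tempfile.TemporaryDirectory() as workdir:
        script = Path(workdir) / "task.py"
        script.write_text(code)
        result = subprocess.run(
            [sys.executable, str(script)],
            cwd=workdir,
            capture_output=True,
            text=True,
            timeout=30,
        )
        return result.stdout.strip()

print(run_in_fresh_workspace("print(2 + 3)"))
```

Because each call gets a clean directory, a crashed run leaves no state behind, which mirrors the recovery behavior OpenAI describes: restart in a fresh container and continue.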

Google ships its most expressive Gemini 3.1 text-to-speech model yet with 70+ language support

Google is rolling out its new text-to-speech model based on Gemini 3.1 Flash. The company says it's the most natural and expressive voice output it has shipped to date. The big new feature is audio tags—simple text commands that let developers control the style, tempo, tone, and accent of the generated speech. The model supports over 70 languages and can handle multi-speaker dialogs.

On the Artificial Analysis leaderboard, the model hits an Elo rating of 1,211 and stands out for its quality-to-price ratio. It beats ElevenLabs v3 in overall quality and sits just behind Inworld 1.5 Max.

Gemini 3.1 Flash TTS ranks among the top text-to-speech models for both quality and value. | Image: Google

Gemini 3.1 Flash TTS has a free tier, but Google uses the data to improve its products. The paid tier runs $1.00 per million tokens for text input and $20.00 per million tokens for audio output. Batch mode cuts those prices in half to $0.50 and $10.00, respectively. On the paid tier, Google doesn't use the data for product improvement.
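As a quick sanity check on those rates, here is a hypothetical cost helper using only the per-million-token prices quoted above (not an official Google tool):

```python
def tts_cost(input_tokens: int, output_tokens: int, batch: bool = False) -> float:
    """Estimate the paid-tier cost in USD from the article's quoted rates:
    $1.00 per million text input tokens, $20.00 per million audio output
    tokens, with batch mode halving both."""
    rate_in, rate_out = (0.50, 10.00) if batch else (1.00, 20.00)
    return (input_tokens / 1e6) * rate_in + (output_tokens / 1e6) * rate_out

# 1M tokens of text in, 0.5M tokens of audio out:
print(tts_cost(1_000_000, 500_000))              # standard tier
print(tts_cost(1_000_000, 500_000, batch=True))  # batch tier, half price
```

As the numbers show, audio output dominates the bill, so batch mode matters most for long generations that aren't latency-sensitive.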

Gemini 3.1 Flash TTS is available as a preview through the Gemini API, Vertex AI for enterprise users, and Google Vids for Workspace users. Anyone can try it for free in Google's AI Studio. All generated audio is tagged with Google's SynthID watermark to flag AI-generated content.

Microsoft Copilot in Word can now track changes and manage comments

Microsoft is adding new Copilot features to Word that target professionals in legal, finance, and compliance.

Copilot can now track changes at the word level, so edits stay transparent and easy to review. Users can also manage comments directly in the text, insert tables of contents, and set up headers and footers with dynamic fields like page numbers.

For multi-step edits, Copilot now shows what it's working on in real time. Microsoft says the features run on "Work IQ," a layer that adapts responses based on the user and their organization. Data stays within Microsoft 365's existing security boundaries. For now, the new features are only available on Windows desktop through the Office Insiders Beta Channel's Frontier program. Web and Mac support will follow.

Just a few days ago, Anthropic released a similar plugin for Word based on its Claude chatbot.

OpenAI's GPT-5.4 Pro reportedly solves a longstanding open Erdős math problem in under two hours

OpenAI's GPT-5.4 Pro model has apparently solved Erdős open math problem #1196. The model reportedly found the solution in about 80 minutes and wrote it up as a LaTeX paper in another 30 minutes. Formal verification is underway.

Mathematician Terence Tao commented in the Erdős Problems forum that the work reveals a previously undescribed connection between the anatomy of integers and Markov process theory. "That would be a meaningful contribution to the anatomy of integers that goes well beyond the solution of this particular Erdos problem," Tao writes. Kevin Barreto, who says he'll soon join OpenAI's AI for Science team, noted in the same forum that the Markov chain technique the model used was a creative step human mathematicians had overlooked despite years of work on the problem.

The discussion is notable because of an ongoing debate about whether LLMs can discover new knowledge in mathematics and other disciplines, beyond the data points learned during training. This example suggests that new, previously undescribed knowledge can also be hidden within already known data points.

Google Chrome's new "Skills" feature lets you save AI prompts and reuse them with a single click

Google is rolling out "Skills," a new Chrome feature that lets users save frequently used AI prompts and reuse them with a single click. Previously, users had to re-enter the same prompt manually each time; one of Google's examples is converting recipes into vegan alternatives.

With Skills, prompts like these can be saved directly from the chat history and pulled up in Chrome by typing a slash (/) or plus sign (+) in Gemini. The feature works across multiple tabs. Google also offers a library of ready-made skills for things like product comparisons, meal planning, and gift selection. Users can customize these or build their own from scratch.

According to Google, Skills uses Chrome's existing security and privacy features and asks for permission before performing certain actions like sending emails. The feature is rolling out now on Mac, Windows, and ChromeOS for users with their Chrome language set to English-US.

Claude Mythos can autonomously compromise weakly defended enterprise networks end-to-end

The UK’s AI Safety Institute tested Anthropic’s Claude Mythos Preview for cyber capabilities. For the first time, an AI model autonomously completed a full attack simulation against a corporate network, but the results come with significant caveats.

Claude Code routines let AI fix bugs and review code on autopilot

Anthropic has introduced "routines" for Claude Code: automated processes that can independently fix bugs, review pull requests, or respond to events without needing a user's local machine. Routines are configured once and then run on a schedule, via API call, or in response to GitHub events on Anthropic's web infrastructure. Typical use cases include nightly bug triage, automatic code reviews based on team-specific checklists, porting changes between languages, and checking deployments for errors.

Routines tap into existing repository connections and connectors. The feature is available as a research preview for Pro, Max, Team, and Enterprise plans, with daily limits of 5 to 25 runs depending on the plan. Support for webhook sources beyond GitHub is planned.

Screenshot of the Claude Code interface for creating a new routine with fields for name and task description, model selection Opus 4.6, a linked repository, three trigger options (Schedule, GitHub event, API) and connectors for Slack and Asana.
Users assign a name, describe the task, select a trigger (schedule, GitHub event, or API), and connect external services like Slack or Asana.

Routines follow a series of recent desktop updates. Anthropic recently added features that let Claude Code start development servers, display web apps, and fix errors on its own, then shipped the /loop command for local, scheduled background tasks. With Routines, that same automation now moves to the cloud.