
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
OpenAI acquires Astral to bring Python's most popular dev tools into its Codex AI coding platform

OpenAI is acquiring Astral, the company behind the widely used Python tools Ruff, uv, and ty. Astral founder Charlie Marsh announced that his team is joining OpenAI's Codex team, which builds the company's platform for agentic AI coding. According to Marsh, Astral's tools are downloaded hundreds of millions of times each month and have become a core part of modern Python development. Marsh says integrating them with Codex gives both projects the most room to grow. Astral was backed by Accel and Andreessen Horowitz, among others.
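For context on why these tools are so embedded in Python workflows: Ruff is configured directly in a project's pyproject.toml. A minimal sketch, using standard Ruff configuration keys purely as an illustration:

```toml
[tool.ruff]
# Maximum line length Ruff enforces and formats to
line-length = 100

[tool.ruff.lint]
# Enable pycodestyle errors (E), pyflakes (F), and import sorting (I)
select = ["E", "F", "I"]
```

Because this configuration lives in the standard project file rather than a proprietary service, a community fork of the tools could keep reading it unchanged.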

Our goal with Codex is to move beyond AI that simply generates code and toward systems that can participate in the entire development workflow—helping plan changes, modify codebases, run tools, verify results, and maintain software over time. Astral’s developer tools sit directly in that workflow.

OpenAI

OpenAI says it will keep the tools open source after the acquisition closes. Astral's Douglas Creager wrote on Hacker News that the tools are under a permissive license, so in a worst-case scenario, the community could fork the software and continue developing it independently.

No one can guarantee how motives, incentives, and decisions might change years down the line. But that's why we bake optionality into it with the tools being permissively licensed.

Douglas Creager
Anthropic turns Claude Code into an always-on AI agent with new channels feature

Anthropic's Claude Code now supports "channels," letting messages, notifications, and webhooks flow directly into a running session. Claude can respond to events even when the user isn't at the terminal, whether that's CI results, chat messages, or monitoring alerts.

Channels run through MCP servers and support two-way communication: Claude reads an incoming message and responds through the same channel. The research preview supports Telegram and Discord, and developers can build their own custom channels. The feature brings Anthropic's tooling closer to the kind of always-on agent setups popularized by OpenClaw.

The feature requires Claude Code version 2.1.80 or later and a claude.ai login; API keys aren't supported. Teams and Enterprise organizations need to explicitly enable channels. Full details are in the official documentation.
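Anthropic hasn't published the full wire format for custom channels here, but the two-way flow described above can be sketched in plain Python. The `Channel` class and event shapes below are hypothetical, purely to illustrate the read-and-respond loop:

```python
import queue

class Channel:
    """Hypothetical two-way channel: events flow in, and replies flow
    back out on the same channel (mirroring the design described above)."""

    def __init__(self, name):
        self.name = name
        self.inbound = queue.Queue()   # e.g. CI results, chat messages, alerts
        self.outbound = queue.Queue()  # the agent's responses

    def push_event(self, event):
        self.inbound.put(event)

    def respond(self, message):
        self.outbound.put(message)

def handle_events(channel, agent):
    """Drain pending events and answer each one through the same channel."""
    while not channel.inbound.empty():
        event = channel.inbound.get()
        channel.respond(agent(event))

# Usage: a stand-in 'agent' reacting to a failed CI run with no one at the terminal
ci = Channel("ci-webhooks")
ci.push_event({"type": "ci_result", "status": "failed", "job": "tests"})
handle_events(ci, lambda e: f"Investigating {e['job']} ({e['status']})")
print(ci.outbound.get())  # -> Investigating tests (failed)
```

The real feature wires such channels through MCP servers rather than in-process queues, but the session-level contract is the same: Claude consumes events from the channel and pushes its replies back over it.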

Google AI Studio now lets you vibe code real-time multiplayer games

Google has launched a new vibe coding feature in Google AI Studio that lets non-programmers and programmers alike turn ideas into working apps using natural language. Users describe what they want, and Gemini 3.1 Pro handles the technical implementation. Apps are built directly in the browser and can handle things like payments, data storage, or messaging. Google says even multiplayer applications like real-time games are possible.

A new "Antigravity Agent" automatically detects when an app needs a database or login system and sets both up through Firebase. Third-party services like payment providers or Google Maps can be connected using API keys. When needed, the agent also installs frontend libraries like Framer Motion or Shadcn on its own. In addition to React and Angular, the platform now supports Next.js as well.

Google Labs turns Stitch into a full AI design platform that converts plain text into user interfaces

Google Labs has turned its design tool Stitch into a full AI-powered software design platform. The tool lets users generate user interfaces from natural language prompts, an approach Google is calling "vibe design." Instead of starting with traditional wireframes, users simply describe what they want the experience to look and feel like. Stitch provides an infinite canvas where images, text, and code can all be dropped in as context.

A new design agent analyzes the entire project and can explore multiple ideas at the same time. Users can make real-time changes directly on the canvas using voice control. Design rules can be shared across tools through a new DESIGN.md format, and static designs get converted straight into clickable prototypes.

Stitch is live at stitch.withgoogle.com for users 18 and older in every region where Gemini is available. Developers can also plug it into tools like AI Studio via an MCP server and an SDK. Google is pitching the tool at both professional designers and founders who have no design background.

Google DeepMind upgrades Gemini API with multi-tool chaining and context circulation

Google DeepMind has expanded the Gemini API with several new tools for developers. Built-in tools like Google Search and Google Maps can now be combined with custom functions in a single request. Previously, developers had to handle each step separately, which was slower and more cumbersome.

Results from one tool can now be automatically passed to another through what Google calls context circulation. Each tool call also gets a unique ID, making it easier to track down bugs.
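The exact Gemini API surface isn't shown here, but the chaining-plus-ID pattern can be sketched generically. The tool names and the `run_chain` helper below are illustrative, not Google's API:

```python
import uuid

def run_chain(tools, query):
    """Run tools in sequence, passing each result to the next
    ('context circulation') and tagging every call with a unique ID
    so individual steps can be traced when debugging."""
    context, trace = query, []
    for tool in tools:
        call_id = str(uuid.uuid4())   # unique ID per tool call
        context = tool(context)       # this result feeds the next tool
        trace.append((call_id, tool.__name__, context))
    return context, trace

# Illustrative stand-ins for a built-in tool plus a custom function
def search(q):
    return f"results for '{q}'"

def summarize(text):
    return f"summary of {text}"

result, trace = run_chain([search, summarize], "coffee near Union Square")
print(result)  # -> summary of results for 'coffee near Union Square'
```

The point of the per-call IDs is visible in `trace`: each entry records which tool ran, under which ID, and what it produced, so a bad intermediate result can be pinned to a specific call.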

Moreover, Google Maps is now available as a data source for the Gemini 3 model family, providing location data, business information, and commute times. Google recommends the new Interactions API for building these workflows.