
Jonathan Kemper

Jonathan is a freelance tech journalist, author, and media coach. He covers open AI models, Chinese AI labs, and the technical underpinnings that make them tick – from novel training methods and scaling strategies to benchmark design. He's also an avid vibecoder.
Claude Code routines let AI fix bugs and review code on autopilot

Anthropic has introduced "routines" for Claude Code: automated processes that can independently fix bugs, review pull requests, or respond to events without needing a user's local machine. Routines are configured once and then run on a schedule, via API call, or in response to GitHub events on Anthropic's web infrastructure. Typical use cases include nightly bug triage, automatic code reviews based on team-specific checklists, porting changes between languages, and checking deployments for errors.

Routines tap into existing repository connections and connectors. The feature is available as a research preview for Pro, Max, Team, and Enterprise plans, with daily limits of 5 to 25 runs depending on the plan. Support for webhook sources beyond GitHub is planned.

Screenshot: Claude Code's routine-creation interface. Users assign a name, describe the task, select a model (Opus 4.6), link a repository, pick a trigger (schedule, GitHub event, or API), and connect external services like Slack or Asana.
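As a rough illustration of what such a routine definition bundles together, here is a minimal sketch in Python. All field names, the model identifier, and the validation logic are assumptions for illustration only; Anthropic has not published a public schema for this research preview.

```python
# Hypothetical sketch of a routine definition. Every field name here is
# an assumption -- Anthropic has not documented a public schema.
routine = {
    "name": "nightly-bug-triage",
    "task": "Scan open issues labeled 'bug', try to reproduce them, "
            "and open draft PRs with suggested fixes.",
    "model": "opus-4.6",                    # model label is illustrative
    "repository": "github.com/example/webapp",
    "trigger": {"type": "schedule", "cron": "0 2 * * *"},
    "connectors": ["slack", "asana"],
}

def validate(routine: dict) -> bool:
    """Check that the sketch carries the pieces the UI asks for:
    a name, a task description, and one of the three trigger types."""
    required = {"name", "task", "trigger"}
    return required.issubset(routine) and routine["trigger"]["type"] in {
        "schedule", "github_event", "api",
    }
```

The point of the sketch is the shape of the feature, not its wire format: one task description, exactly one of three trigger types, and optional connectors.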

Routines follow a series of recent desktop updates. Anthropic recently added features that let Claude Code start development servers, display web apps, and fix errors on its own, then shipped the /loop command for local, scheduled background tasks. With routines, that same automation now moves to the cloud.

Arcee AI spent half its venture capital to build an open reasoning model that rivals Claude Opus in agent tasks

US start-up Arcee AI spent roughly half its total venture capital to train Trinity-Large-Thinking, an open reasoning model with 400 billion parameters designed to take on Claude Opus in agent tasks.

Google's Gemma 4 puts free agentic AI on your phone and no data ever leaves the device

Google’s new open-source model, Gemma 4, processes text, images, and audio completely on-device. Using agent skills, the AI can independently tap into tools like Wikipedia or interactive maps; no cloud required.

Alibaba's Qwen team built HopChain to fix how AI vision models fall apart during multi-step reasoning

When AI models reason about images, small perceptual errors compound across multiple steps and produce wrong answers. Alibaba's HopChain framework tackles this by generating multi-stage image questions that break complex problems into linked individual steps, forcing models to verify each visual detail before drawing conclusions. The approach improves results on 20 of 24 benchmarks.
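The chaining idea can be sketched in a few lines of Python. This is a toy illustration of the general multi-hop pattern, not HopChain's actual pipeline: the names (`Hop`, `run_chain`) and the hand-written scene are assumptions, whereas the real framework generates such chains automatically from images.

```python
from dataclasses import dataclass
from typing import Callable

# Toy illustration of multi-hop visual question chaining. The class and
# function names are invented for this sketch; HopChain's real pipeline
# builds these chains automatically from image data.

@dataclass
class Hop:
    question: str
    answer_fn: Callable[[dict, str], str]  # (scene, previous answer) -> answer

def run_chain(scene: dict, hops: list[Hop]) -> list[str]:
    """Answer each sub-question in order, feeding every answer forward,
    so a perceptual error at one hop is visible at the next instead of
    silently compounding inside a single end-to-end answer."""
    answers, prev = [], ""
    for hop in hops:
        prev = hop.answer_fn(scene, prev)
        answers.append(prev)
    return answers

# Hand-built scene standing in for "what color is the object left of the mug?"
scene = {"mug": {"pos": 3}, "book": {"pos": 2, "color": "red"}}
chain = [
    Hop("Which object is left of the mug?",
        lambda s, _: min((k for k in s if k != "mug"), key=lambda k: s[k]["pos"])),
    Hop("What color is that object?",
        lambda s, prev: s[prev]["color"]),
]
```

Running `run_chain(scene, chain)` yields the intermediate answer ("book") alongside the final one ("red"), which is exactly the verifiability the step-by-step decomposition buys.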

Alibaba's Qwen team makes AI models think deeper with new algorithm

Reinforcement learning hits a wall with reasoning models because every token gets the same reward. A new algorithm from Alibaba's Qwen team fixes this by weighting each step based on how much it shapes what comes next, roughly doubling the length of the models' thought processes along the way.
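The core idea of non-uniform credit assignment can be sketched like this. The influence scores below are a stand-in placeholder; how the Qwen team actually measures a step's effect on what follows is defined in their work, not reproduced here.

```python
# Illustrative sketch of step-weighted reward assignment. The "influence"
# signal is a made-up placeholder; the Qwen team's actual weighting
# measure is their own and is not reproduced here.

def step_weighted_advantages(final_reward: float,
                             influence: list[float]) -> list[float]:
    """Spread one sequence-level reward across steps in proportion to
    each step's influence on what comes next, instead of uniformly.
    Normalized so the mean per-step signal still equals the reward."""
    total = sum(influence)
    n = len(influence)
    return [final_reward * (w / total) * n for w in influence]

# Step 3 shapes the rest of the trace the most, so it gets the most credit.
influence = [0.1, 0.1, 0.6, 0.2]
adv = step_weighted_advantages(1.0, influence)
```

With uniform rewards every entry would be 1.0; here the high-influence step is amplified and the low-influence steps are discounted, while the average signal is unchanged.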