
Google engineer says Claude Code built in one hour what her team spent a year on

Image: Sora prompted by THE DECODER

Key Points

  • Google Principal Engineer Jaana Dogan reports that Anthropic's Claude Code generated a distributed agent orchestration system in just one hour—a problem Google had been working on since last year.
  • While the result isn't perfect, Dogan notes it's comparable to what Google built previously, highlighting the rapid advancement in AI-assisted coding capabilities.
  • Claude Code creator Boris Cherny suggests enabling the tool to self-check its work, a feedback loop that can double or triple the output quality.

Update, January 4, 2026:

After her comments attracted significant attention, Dogan clarifies her original assessment. To "cut through the noise," as she puts it: Google has built several versions of the system over the past year, each with tradeoffs and no clear winner. When prompted with the best ideas that survived, coding agents can generate a decent toy version in about an hour.

"What I built this weekend isn't production grade and is a toy version, but a useful starting point," Dogan continues. "I am surprised with the quality of what's generated in the end because I didn't prompt in depth about design choices yet CC was able to give me some good recommendations."

Dogan emphasizes that it takes years to learn, ground ideas in products, and arrive at patterns that last. Once you have that insight and knowledge, building isn't that hard anymore. "It's totally trivial today to take your knowledge and build it again, which wasn't possible in the past," she writes. According to Dogan, because you can build from scratch, the final artifacts are free of baggage.


Original article, January 3, 2026:

A senior Google engineer says Anthropic's Claude Code generated a working system in one hour that her team has been developing since last year.

Jaana Dogan, Principal Engineer at Google responsible for the Gemini API, wrote on X that she gave Claude Code a problem description and got back a result in one hour that matches what her team has been building for the past year. The task involved distributed agent orchestrators: systems that coordinate multiple AI agents. According to Dogan, Google had explored various approaches to this problem without reaching consensus.
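For readers unfamiliar with the term, the sketch below shows the general shape of such a system: an orchestrator fans a task out to several worker agents and gathers their results. It is a minimal, hypothetical Python illustration, not Dogan's prompt or Google's design; the Agent class and its run method are stand-ins for real model calls.

```python
# Minimal, hypothetical sketch of an agent orchestrator: it fans a task out
# to several worker "agents" concurrently and collects their results.
# This does not reflect Google's or Dogan's actual system; Agent.run() is a
# placeholder for a real model or tool call.
import asyncio
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    role: str

    async def run(self, task: str) -> str:
        # Placeholder for a real LLM/tool invocation.
        await asyncio.sleep(0.1)
        return f"[{self.name}/{self.role}] result for: {task}"

async def orchestrate(task: str, agents: list[Agent]) -> list[str]:
    # Dispatch the same task to every agent in parallel and wait for all of them.
    return await asyncio.gather(*(a.run(task) for a in agents))

if __name__ == "__main__":
    workers = [Agent("planner", "plan"), Agent("coder", "implement"), Agent("reviewer", "review")]
    for line in asyncio.run(orchestrate("design a task queue", workers)):
        print(line)
```

A production system would add scheduling, failure handling, and message passing between agents, which is where the design tradeoffs Dogan mentions come in.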

The prompt wasn't particularly detailed, just three paragraphs, Dogan explained when asked. She built a simplified version based on existing ideas to test Claude Code, since she couldn't use internal company details.


The output isn't perfect and needs refinement, Dogan admits. She recommends that skeptics of coding agents try them in areas where they have deep expertise.

Jaana Dogan via X

When asked whether Google uses Claude Code, Dogan said it's only allowed for open-source projects, not internal work. One user asked when Gemini would reach this level. Dogan's response: "We are working hard right now. The models and the harness."

This industry has never been a zero-sum game, she adds, so giving competitors credit where it's due makes sense. "Claude Code is impressive work, I'm excited and more motivated to push us all forward."

AI coding tools have advanced faster than anyone predicted

Dogan also outlined the rapid evolution of AI-assisted programming: in 2022, systems could complete individual lines. In 2023, they could handle entire sections. By 2024, they could work across multiple files and build simple apps. By 2025, they could create and restructure entire codebases.

Back in 2022, she didn't believe the 2024 milestone would be feasible to scale as a global developer product. In 2023, today's level seemed five years away. "Quality and efficiency gains in this domain are beyond what anyone could have imagined so far," she wrote.

Claude Code creator shares workflow tips

Around the same time, Boris Cherny, the creator of Claude Code, published his tips for using the tool. His top recommendation: give Claude a way to verify its own work. This feedback loop doubles or triples the quality of the final output.
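As a rough illustration of what such a verification loop can look like in general (a generic sketch, not Cherny's actual workflow or a built-in Claude Code feature), the pattern is: let the agent make a change, run an automated check such as the test suite, and feed any failures back into the next attempt. The generate_patch function below is a placeholder for the agent call.

```python
# Generic sketch of a "let the agent check its own work" loop.
# generate_patch() stands in for a coding-agent call; the real feedback
# mechanism in any given tool may look quite different.
import subprocess

def run_tests() -> tuple[bool, str]:
    # Run the project's test suite and capture its output as feedback.
    proc = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def generate_patch(task: str, feedback: str) -> None:
    # Placeholder: ask the coding agent to edit the code, given the task
    # and any failing-test output from the previous attempt.
    ...

def solve_with_verification(task: str, max_rounds: int = 3) -> bool:
    feedback = ""
    for _ in range(max_rounds):
        generate_patch(task, feedback)
        ok, feedback = run_tests()
        if ok:
            return True  # the change passes the agent's own check
    return False
```

Any objective signal the agent can check its work against plays the same role here; a test suite is just one example.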

Cherny suggests starting most sessions in plan mode and iterating with Claude until the plan is solid. After that, Claude can usually finish the task in one pass. For recurring workflows, he uses slash commands and subagents that automate specific tasks like simplifying code or testing the app.

For longer tasks, Cherny runs background agents that review Claude's work when it's done. He also runs multiple Claude instances in parallel to tackle different tasks simultaneously. His default model is Opus 4.5.

During code reviews, Cherny's team tags Claude directly in colleagues' pull requests to add documentation. Claude Code also integrates with external tools like Slack, BigQuery for data analysis, and Sentry for error logs, Cherny says.
