Andrej Karpathy says humans are now the bottleneck in AI research areas with easy-to-measure results
Karpathy spent months hand-tuning his GPT-2 training setup. Then he let an autonomous agent take over for a single night. The agent discovered fine-grained adjustments Karpathy had overlooked, tweaks that also interact with each other in ways that are easy for a human to miss but straightforward for a systematic search to catch.
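The article doesn't describe the agent's method, but the core idea of why a systematic search catches interacting settings that a human misses can be sketched in a few lines. The toy loss function, the parameter names, and the random-search strategy below are all illustrative assumptions, not Karpathy's actual setup:

```python
import random

# Hypothetical toy loss surface in which two hyperparameters interact:
# the best learning rate shifts with the chosen batch size. A human tuning
# one knob at a time can easily miss this coupling; a search over both
# knobs jointly finds it mechanically. (Names and values are illustrative.)
def toy_loss(lr, batch_size):
    best_lr = 1e-3 * (batch_size / 64)  # optimum lr depends on batch size
    return abs(lr - best_lr) / best_lr + 0.01 * abs(batch_size - 128) / 128

def random_search(trials=2000, seed=0):
    """Plain random search over both hyperparameters at once."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        lr = 10 ** rng.uniform(-5, -1)            # log-uniform learning rate
        bs = rng.choice([32, 64, 128, 256, 512])  # discrete batch sizes
        loss = toy_loss(lr, bs)
        if best is None or loss < best[0]:
            best = (loss, lr, bs)
    return best

loss, lr, bs = random_search()
print(f"best loss={loss:.4f} lr={lr:.2e} batch_size={bs}")
```

Even this naive strategy reliably lands near the coupled optimum, which is the point: once the objective is measurable, the search needs no human in the loop between trials.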
Karpathy's takeaway is that researchers should remove themselves from the loop, at least in areas where objective metrics exist. "To get the most out of the tools that have become available now, you have to remove yourself as the bottleneck. You can't be there to prompt the next thing," Karpathy says. Researchers at major AI labs, he argues, place too much unfounded trust in their own intuition and are ultimately in the process of systematically automating themselves out of a job. Which, Karpathy notes, is also their stated goal.
While models keep getting better at coding and other easy-to-verify tasks, Karpathy doesn't think these gains will carry over smoothly to less measurable domains. "Anything that feels softer is, like, worse," he says.