
Former OpenAI researcher and Tesla executive Andrej Karpathy argues that schools should stop trying to police AI-generated homework.

In his view, detecting AI-written text has already failed, and the entire approach to evaluating student work needs a fundamental reset.

Karpathy's core argument is simple: educators can't put the technology back in the bottle. He says schools should assume that any work done outside the classroom has used AI.

Technical detection tools won't fix that. "You will never be able to detect the use of AI in homework. Full stop," Karpathy wrote on X.

He argues that current detectors don't really work, can be defeated in various ways, and are "in principle doomed to fail." A recent study, however, showed that at least one AI text detector worked reliably under the conditions of this specific test.

But even if detection works partially, Karpathy's broader point stands: it isn't enough. Policing usage creates stress for teachers and students and fosters a culture of cheating, an approach that makes little sense in a world where children grow up with AI.

As a result, Karpathy believes the majority of grading has to shift to in-class work, where teachers can physically monitor students. This change keeps students motivated to learn how to solve problems without AI, since they know they will be evaluated without it later.

Karpathy stresses that he's not pushing for an anti-tech school system. Students should learn to use AI because it is "here to stay and it is extremely powerful." As he puts it, no one wants students to be "naked in the world" without access to it.

He compares AI to calculators. Even though calculators are pervasive and speed up work, schools still teach basic math so students can do it by hand. Students need to understand the principles well enough to verify when a tool gets something wrong - like when a bad prompt generates a wrong answer.

That verification ability is especially important with AI, Karpathy notes, because today's models are "a lot more fallible in a great variety of ways" compared to calculators.

Flipping the classroom for the AI era

For Karpathy, the logical outcome is a shift to a model where testing happens face-to-face. While teachers retain discretion over the exact setup - choosing between no tools, cheat sheets, open book, or direct AI access - the evaluation setting moves to the classroom.

Karpathy frames the goal as dual competency: students should be proficient in using AI, but "can also exist without it." In his view, the only way to get there is to "flip classes around" and move the majority of testing to in-class settings.

He recently launched a startup called Eureka Labs to work on this intersection of AI and education. The company plans to build an "AI-native" school where human teachers design course content, and AI assistants scale it and guide students individually.

Summary
  • Andrej Karpathy, former OpenAI researcher, says technical measures to detect AI-generated homework have not worked and there is currently no reliable way to prove if AI was used.
  • He advocates for evaluating student performance mainly in the classroom, so teachers can maintain control and ensure fair assessment.
  • Karpathy suggests using a "flipped classroom" approach: students learn and practice with AI at home, but exams are conducted in school.
Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.