95% of UK students now use AI and their experiences couldn't be more divided
Key Points
- According to a recent HEPI survey, 95 percent of UK undergraduates now use generative AI, with a growing number incorporating AI-generated text directly into exam submissions.
- Student opinion is split: roughly half see AI as beneficial, while others fear losing independent thinking skills. A medical student case study found that those relying solely on AI without human feedback performed the worst, yet felt the most confident.
- Most students consider AI skills essential, but fewer than half feel supported by their lecturers, with clear disparities across disciplines, socioeconomic backgrounds, and gender.
Generative AI is practically universal among British students. But a new survey reveals a major gap between how students use AI and how well universities support them.
In just three years, generative AI has gone from novelty to standard practice at UK universities. According to the Higher Education Policy Institute's (HEPI) Student Generative AI Survey 2026, 95 percent of full-time undergraduates now use AI in at least one form.
That figure stood at 66 percent in 2024. The findings are based on responses from 1,054 students surveyed in December 2025. The question is no longer whether students use AI, the authors write, but how well they use it.

AI-generated text in exams quadruples as cheating fears grow
The vast majority of students use generative AI for assessed work: having concepts explained, summarizing material, and structuring ideas. About a third also use AI as a search engine. The share of students inserting AI-generated text directly into exam papers has quadrupled since 2024. At the same time, the use of general tools like ChatGPT is declining slightly, which the authors attribute to more specialized tools gaining ground.
Nearly two-thirds of students say exam formats have changed significantly. In free-text responses, several express fear of being falsely accused of cheating. "Constant worry that my work will flag AI detection, even though I have never used AI to write an assignment," one student wrote.
Students split between deeper learning and intellectual dependency
The survey reveals a deeply divided student body. Just under half say AI has improved their study experience, but others worry about fairness, losing their own skills, and what AI means for the job market. Some feel creative subjects are being devalued.
Two quotes from the study capture the divide. One student says AI has helped them "focus on critical analysis and deeper understanding" by "saving hours of tedious work." Another puts it bluntly: "I'm not using my brain at all."
When it comes to sources, students split into three roughly equal groups: those relying on traditional sources, those using AI, and those combining both. Nearly one in ten barely consults traditional sources anymore.
Around 15 percent of students use AI for companionship, advice, or to combat loneliness, according to the report. Some turn to purely AI-based therapy services. Overall, four in ten say AI affects their feelings of loneliness, with positive and negative effects roughly balanced. "It's because it's like having a friend close by," one student wrote. Another: "Just feel isolated."

Most students see AI skills as essential, but fewer than half feel supported
More than two-thirds of students consider AI skills essential, yet fewer than half feel supported by their lecturers. Only about a third say their university actively encourages them to use AI.
The availability of AI tools has multiplied since 2024, and significantly fewer universities now ban AI than the year before. Russell Group universities, which lagged in 2025, are now the most likely to encourage students to use AI.
But persistent inequalities remain across subject, background, and gender. Humanities students are significantly more skeptical and feel particularly unsupported. Students from wealthier households use AI more often, while male students tend to have more prior experience. A third of students arrive at university with no AI experience at all. Environmental concerns play a role, too. Nearly a quarter say the ecological impact of AI keeps them from using it.
Medical students using AI without oversight performed worst but felt most confident
A case study from Queen Mary University of London found that medical students who only used AI without human feedback during clinical exercises performed the worst, but were the most confident in their abilities. Professor Rakesh Patel compares this to giving students a sports car before they've learned to drive. As a counterexample, the report highlights Aston University, which made AI training mandatory across all programs and provided all staff with AI tools as early as 2023.
The HEPI report recommends introducing first-year students to AI in a structured way, creating clear exam guidelines with both AI-free and AI-supported formats, providing tools for all students, and conducting targeted research into how AI affects loneliness and mental well-being.
The debate around AI in education has been intensifying for months. Last year, an Anthropic study showed that in nearly half of all analyzed conversations with the AI assistant Claude, students outsourced higher-order thinking like analysis and creation to the AI.
At the same time, AI companies are pushing into higher education. Anthropic launched "Claude for Education" with a dedicated learning mode, while OpenAI has been courting students with "ChatGPT Edu" since 2024, aiming to lock in users early and keep them as they move into the workforce.
AI pioneer Andrej Karpathy recently argued that schools need a fundamental rethink: assume all work done outside the classroom was created with AI, and shift exams entirely to in-person settings.