Programmers who rely on AI assistants tend to ask fewer questions and learn more superficially, according to new research from Saarland University. A team led by Sven Apel found that students were less critical of the code suggestions they received when working with tools like GitHub Copilot. In contrast, pairs of human programmers asked more questions, explored alternatives, and learned more from one another.
In the experiment, 19 students worked in teams: six human-human pairs and seven human-AI teams, in which a single student worked alongside the assistant. According to Apel, many of the AI-assisted participants simply accepted code suggestions because they assumed the AI's output was already correct. He noted that this habit can introduce mistakes that later require significant effort to fix. Apel said AI tools can be helpful for straightforward tasks, but that complex problems still benefit from genuine collaboration between humans.
