AI in practice

Improved AI model boosts GitHub Copilot's code generation capabilities

Matthias Bastian


GitHub Copilot is getting an upgrade with an improved AI model and enhanced contextual filtering, resulting in faster and more tailored code suggestions for developers.

The new AI model delivers a 13% improvement in latency, while the enhanced contextual filtering brings a 6% relative improvement in code acceptance. Both improvements are coming to GitHub Copilot for Individuals and GitHub Copilot for Business.

According to GitHub, the new model was developed together with OpenAI and Azure AI. The 13% latency improvement means Copilot delivers code suggestions to developers faster than before, promising a significant increase in overall productivity.

The improved contextual filtering takes a wider range of a developer's context and usage patterns into account. By filtering prompts and code suggestions more intelligently, it surfaces suggestions that are more relevant to the specific coding task, which accounts for the 6% relative improvement in code acceptance rates.

"With these improvements, developers can expect to stay in the flow and work more efficiently than ever, leading to faster innovation with better code," Github writes in the announcement post, which is now offline but still visible via Waybackmachine.  More updates will be announced at the upcoming GitHub Universe event.

Copilot's switch to GPT-4

Copilot's biggest update this year came in March, when Microsoft and GitHub upgraded the AI coding tool to GPT-4, moving away from OpenAI's dedicated code model Codex, which was based on GPT-3. The switch brought "significant gains in reasoning and code generation," according to GitHub.

In October 2022, GitHub CEO Thomas Dohmke said he expects 80 percent of code to be AI-generated within just five years, claiming that 40 percent of the code in GitHub Copilot's beta was already generated by the tool and that it gave developers a 55 percent speed boost.