Recently, OpenAI announced the ChatGPT Team plan with support for "GPT-4 32K". The mention of GPT-4 32K was surprising, as GPT-4 Turbo, a newer model in use since November, already offers a four times larger context window of 128K. The context window indicates how much information the model can process at once.

According to an OpenAI spokesperson, the information in the announcement is correct: the models in ChatGPT Team and Plus are 32K versions of the Turbo model. OpenAI refers to GPT-4 in the announcement but means GPT-4 Turbo; the company only differentiates between Turbo and non-Turbo in the API. The 128K Turbo model is available in the Enterprise version of ChatGPT and via the API.
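
For illustration, here is a minimal sketch of how the distinction plays out on the API side, where models are addressed by identifier and the context window caps how many tokens a request can contain. The model names and token limits shown reflect OpenAI's published values at the time and may have changed since; the helper function is a hypothetical example, not part of the OpenAI library.

```python
# Sketch: the API distinguishes models by identifier, and the context
# window limits how many tokens (prompt + completion) a request may use.
# Model names and limits are the published values at the time of writing.
import tiktoken
from openai import OpenAI

CONTEXT_WINDOWS = {
    "gpt-4": 8_192,                  # base GPT-4
    "gpt-4-32k": 32_768,             # GPT-4 with 32K context
    "gpt-4-1106-preview": 128_000,   # GPT-4 Turbo with 128K context
}

def fits_in_context(model: str, prompt: str, completion_budget: int = 1_000) -> bool:
    """Check whether a prompt plus a reserved completion budget fits the window."""
    enc = tiktoken.encoding_for_model("gpt-4")  # GPT-4 family tokenizer
    return len(enc.encode(prompt)) + completion_budget <= CONTEXT_WINDOWS[model]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
prompt = "Summarize the difference between GPT-4 and GPT-4 Turbo."
model = "gpt-4-1106-preview" if fits_in_context("gpt-4-1106-preview", prompt) else "gpt-4-32k"

response = client.chat.completions.create(
    model=model,
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```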
