
OpenAI releases improved GPT-4 model for ChatGPT and via API

Image: OpenAI

Key Points

  • OpenAI has released an updated GPT-4 model that performs better on benchmarks measuring mathematical problem solving, answering challenging questions, reading comprehension, and code generation. The model is less verbose and is now available in ChatGPT and via the API.
  • The new GPT-4 Turbo model with image processing is now generally available in the OpenAI API. It can analyze both text and images in a single API call, making it more intelligent and multimodal than previous versions.
  • The updated model supports JSON mode and function calling for vision requests, simplifying integration into applications and developer workflows.

Update from April 12, 2024:

The updated GPT-4 is now available in ChatGPT. According to OpenAI, the model is mainly better at math and coding, and is also less verbose.

ChatGPT won't babble you to death anymore. | Image: OpenAI

The benchmark results break down as follows:

  • MATH (Measuring Mathematical Problem-Solving With the MATH Dataset): +8.9%, measures the ability to solve mathematical problems.
  • GPQA (A Graduate-Level Google-Proof Q&A Benchmark): +7.9%, measures the ability to answer challenging questions that cannot be answered by simple Google searches.
  • MGSM (Multilingual Grade School Math Benchmark): +4.5%, measures the ability to solve elementary school level math problems in multiple languages.
  • DROP (A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs): +4.5%, measures reading comprehension and the ability to draw discrete conclusions from paragraphs.
  • HumanEval (Evaluating Large Language Models Trained on Code): +1.6%, measures the ability to understand and generate code based on human evaluations.
  • MMLU (Measuring Massive Multitask Language Understanding): +1.3%, measures the understanding and ability to solve tasks from various domains.

Original article from April 9, 2024:


OpenAI has announced improvements to the GPT-4 Turbo model in its API, which will soon be available in ChatGPT.

The new multimodal GPT-4 Turbo model (version 2024-04-09, knowledge cutoff December 2023) with image processing is now generally available in the API.

The model is "more intelligent and multimodal", according to OpenAI, and can analyze both text and images and draw conclusions with just one API call. Previously, developers had to use separate models for this.
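A single call combining text and an image might look like the following sketch. It assumes the request shape of OpenAI's Chat Completions API; the payload is built as a plain dict for clarity, and the prompt and image URL are placeholders.

```python
# Sketch of a single multimodal request body, assuming the shape of
# OpenAI's Chat Completions API; prompt and image URL are placeholders.
def build_vision_request(prompt: str, image_url: str) -> dict:
    """Combine text and an image in one chat completion request."""
    return {
        "model": "gpt-4-turbo-2024-04-09",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_vision_request(
    "What is shown in this diagram?",
    "https://example.com/diagram.png",
)
```

With the official Python SDK, such a dict would be passed to `client.chat.completions.create(**request)`; previously, image analysis and text reasoning required separate model calls.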

Vision requests now also support common API features such as JSON mode and function calling, which should make it easier to integrate the model into applications and developer workflows.
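These two features can be layered onto a vision request body as in the sketch below. It assumes the Chat Completions request shape; the `log_detected_objects` tool and its schema are purely illustrative, not part of OpenAI's API.

```python
# Sketch: enabling JSON mode and declaring a function-calling tool on a
# request body, assuming the Chat Completions API shape. The tool name
# and schema below are hypothetical examples.
def with_json_mode(request: dict) -> dict:
    """Ask the model to respond with a valid JSON object."""
    return {**request, "response_format": {"type": "json_object"}}

def with_tool(request: dict) -> dict:
    """Declare a hypothetical function the model may choose to call."""
    tool = {
        "type": "function",
        "function": {
            "name": "log_detected_objects",
            "description": "Record the objects detected in an image.",
            "parameters": {
                "type": "object",
                "properties": {
                    "objects": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["objects"],
            },
        },
    }
    return {**request, "tools": [tool]}
```

Note that OpenAI's JSON mode also expects the prompt itself to instruct the model to produce JSON; the decorator-style helpers here only set the request fields.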


OpenAI demonstrates the capabilities of the new model with use cases such as tldraw, which generates working code from an interface sketch.

Video: Roberto Nickson


Source: OpenAI via X | OpenAI