OpenAI has released two large language models with open weights for the first time since GPT-2: gpt-oss-120b and gpt-oss-20b.


Both are specialized reasoning models built with a Mixture-of-Experts architecture, designed for complex logical reasoning, step-by-step problem-solving, and working with external tools like web search or code interpreters. The models are distributed under the Apache-2.0 license.

OpenAI says gpt-oss-120b can run on a single 80 GB GPU, while gpt-oss-20b is aimed at systems with 16 GB of RAM. CEO Sam Altman described the move as part of building a "democratic" AI infrastructure. Both models are available through Hugging Face.
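For anyone who wants to try the release, the weights on Hugging Face should work with the standard transformers workflow. The following is a minimal sketch, assuming the model ID openai/gpt-oss-20b and a transformers version that supports the architecture; the exact ID and dependencies are assumptions, not confirmed details from OpenAI.

```python
# Minimal sketch of loading gpt-oss-20b via Hugging Face transformers.
# The model ID "openai/gpt-oss-20b" and architecture support in a recent
# transformers release are assumptions, not verified against the release.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # spread weights across available devices (needs accelerate)
)

messages = [
    {"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}
]

result = generator(messages, max_new_tokens=256)
# The pipeline returns the full conversation; the last message is the model's reply.
print(result[0]["generated_text"][-1]["content"])
```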

Strong performance in logic, coding, and health

According to OpenAI, gpt-oss-120b delivers benchmark results close to the proprietary o4-mini model and outperforms GPT-4o in many scenarios. The models are particularly strong at tasks that require extended reasoning. On the AIME 2024 math competition, gpt-oss-120b scores 96.6% accuracy when using tools, just behind o4-mini at 98.7%.


The models also perform well on programming challenges. On the Codeforces benchmark, gpt-oss-120b achieves an Elo rating of 2622, approaching o4-mini's 2719. On SWE-bench Verified, the open models score 62% (gpt-oss-120b) and 60% (gpt-oss-20b), compared to 69% for o4-mini and 68% for o3.


In healthcare, gpt-oss-120b surpasses many other models on the HealthBench benchmark and nearly matches o3, according to OpenAI.


Note that these are text-only models: they cannot process or generate images, and their knowledge cutoff is June 2024.

The gpt-oss models also have drawbacks in factual accuracy. According to OpenAI, they are more prone to hallucinations, which is expected for smaller models with less world knowledge.

A new safety protocol for open source AI

OpenAI is introducing a new safety protocol to address the risks posed by open-source models, which can be modified by malicious actors. The centerpiece is a "worst-case fine-tuning" process, in which the model is deliberately fine-tuned toward dangerous capabilities such as planning cyberattacks or misusing biological knowledge.


According to OpenAI's safety paper, even after this process, the model did not reach a "high" risk threshold in the monitored categories. The results were reviewed by OpenAI's internal Safety Advisory Group and external experts. OpenAI concludes that releasing these models does not significantly increase the risk of open source models acquiring dangerous capabilities, since other models like Qwen 3 Thinking and Kimi K2 already offer similar performance.

Despite these safety measures, the Model Card highlights ongoing challenges. The gpt-oss models are less reliable than o4-mini at following instruction hierarchies, and their reasoning chains were intentionally not filtered for "bad thoughts," so chain-of-thought output may contain unmoderated content and moderation is left to developers.

Summary
  • OpenAI has released two large language models with open weights, gpt-oss-120b and gpt-oss-20b, both designed for advanced logical reasoning and complex problem solving, and available under the Apache-2.0 license on Hugging Face.
  • The larger model, gpt-oss-120b, delivers strong results in logic, coding, and health tasks, nearly matching the performance of OpenAI's proprietary o4-mini model, but it is more susceptible to factual errors and hallucinations.
  • OpenAI has implemented a new safety protocol that includes deliberate fine-tuning on risky tasks; after review, the company determined the models do not significantly raise safety risks compared to existing open models, though moderation is left to developers and some content may remain unfiltered.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.