Sam Altman, OpenAI's CEO, is exploring the company's next steps in open source development, turning to X for user feedback on potential directions. This move comes amid significant changes at the company, which is transforming its for-profit division into a public benefit corporation.
OpenAI's relationship with open source has evolved considerably since Microsoft's investment. After GPT-4's release, the company largely stepped back from open source, limiting its contributions to smaller projects like Whisper, and saw many of its executives depart. At the time, Altman cited safety concerns for this retreat. Recently, however, he acknowledged that the strategy may have been misguided. The admission came as competitors such as Deepseek released their V3 and R1 models.
o3-mini or smartphone model?
Now there's a sign of life: "For our next open source project, would it be more useful to do an o3-mini level model that is pretty small but still needs to run on GPUs, or the best phone-sized model we can do?" Altman asked on X. The o3-mini-level model is currently leading the poll, with just over 12 hours to go.
for our next open source project, would it be more useful to do an o3-mini level model that is pretty small but still needs to run on GPUs, or the best phone-sized model we can do?
- Sam Altman (@sama) February 18, 2025
While ChatGPT and OpenAI's API services remain industry leaders, open source competitors have gained ground. Meta, Deepseek (High-Flyer), Alibaba, and Mistral now offer open source models that compete with OpenAI's offerings, and xAI plans to release Grok 2 as open source after launching Grok 3. An open source o3-mini would provide a strong alternative without competing directly with OpenAI's premium products: GPT-4.5 is currently in testing, and GPT-5, which is expected to incorporate the larger o3 model, is slated for release later this year.
A return to original principles?
This move represents less a return to original principles and more an acknowledgment that a completely closed approach is unsustainable given rapid competitive advances.
Jan Leike, who left OpenAI and joined Anthropic after criticizing OpenAI's safety practices, recently expressed concerns about the company's restructuring. He argued that replacing its original mission of "ensuring that AGI benefits all of humanity" with "much less ambitious charitable initiatives in sectors such as health care, education, and science" misses the mark. Instead, Leike suggests the nonprofit should support initiatives that develop AI for broader benefit, including AI governance, safety and alignment research, and efforts to address labor market impacts.
Perhaps an open source release could be a middle ground, allowing safety researchers to better understand what reasoning models are actually doing.