Update March 5, 2025:
OpenAI's usage-based pricing approach is taking shape. CEO Sam Altman shared a concept on X that would convert current $20 subscription plans into credits. Users could then spend these credits on specific features like Deep Research, o1, and GPT-4.5. When credits run out, users would need to purchase more.
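For illustration only, a credit system along these lines could work roughly as sketched below; the plan size, feature names, and per-use costs are invented placeholders, not figures OpenAI has announced.

```python
# Hypothetical sketch of the credit model described above. The plan size,
# feature names, and per-use costs are invented for illustration; OpenAI
# has not published any actual credit values.

FEATURE_COSTS = {
    "deep_research": 25,   # assumed credits per use
    "o1": 5,
    "gpt-4.5": 10,
}

class CreditPlan:
    def __init__(self, monthly_credits=1000):
        # e.g. what a $20 subscription might convert into
        self.balance = monthly_credits

    def use(self, feature):
        """Deduct the feature's cost, or refuse if the balance is too low."""
        cost = FEATURE_COSTS[feature]
        if self.balance < cost:
            return False   # credits exhausted: user must buy more
        self.balance -= cost
        return True

plan = CreditPlan()
if not plan.use("deep_research"):
    print("Out of credits - purchase more to continue.")
```

The sketch makes the sticking point visible: every request either succeeds or hits a hard stop that forces a purchase decision, which is exactly the friction users are reacting to.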
The proposal has met with considerable skepticism from the ChatGPT community on X. Many users express concern about what they call "credit anxiety" - the reluctance to use the service for fear of depleting their credit balance.
Critics point out that granular usage-based pricing often leads users to question the value of each interaction. Others note that a credit system adds unnecessary complexity, comparing it to managing arcade tokens. Some worry about the possibility of unused credits expiring at the end of each month.
The X community has responded with several alternative suggestions for OpenAI to consider. Many users advocate for keeping the current pricing tiers while adding a top-up option for those who need additional usage. Others suggest creating a new mid-range subscription tier priced between $50 and $70 to bridge the gap between existing plans.
While some users propose eliminating free access to dedicate more GPU resources to paying customers, this approach could hinder ChatGPT's growth and conversion potential. Plus and Pro subscribers currently generate the majority of OpenAI's revenue, but maintaining a free tier remains important for user acquisition.
Original article from January 6, 2025
OpenAI considers usage-based pricing for ChatGPT
OpenAI is exploring usage-based pricing for ChatGPT, according to CEO Sam Altman.
In a recent Bloomberg interview, Altman admitted that ChatGPT's current pricing strategy isn't exactly sophisticated. When they first launched paid tiers, they just tested two price points - $20 and $42 monthly. Users balked at $42 but seemed happy with $20, so that's what they went with. No focus groups, no market research - just a gut call made in late 2022.
Now OpenAI is considering a more flexible approach. "A lot of customers are telling us they want usage-based pricing," Altman explained. "Some months I might need to spend $1,000 on compute, some months I want to spend very little." He specifically ruled out time-based billing though, calling that an AOL-era relic they want to avoid.
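To make the contrast with a flat subscription concrete, here is a hypothetical metered-billing calculation; the model names and per-token rates below are placeholders and do not reflect OpenAI's actual prices.

```python
# Hypothetical usage-based bill: pay only for the compute actually consumed.
# Model names and per-token rates are placeholders, not OpenAI's prices.

RATE_PER_1K_TOKENS = {
    "light_model": 0.002,      # assumed $ per 1,000 tokens
    "heavy_reasoning": 0.06,
}

def monthly_bill(usage):
    """usage maps a model name to the tokens consumed that month."""
    return sum(tokens / 1000 * RATE_PER_1K_TOKENS[model]
               for model, tokens in usage.items())

# A heavy month versus a light one, echoing Altman's example
print(monthly_bill({"heavy_reasoning": 16_000_000}))  # ~$960
print(monthly_bill({"light_model": 200_000}))         # ~$0.40
```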
This shift makes sense given OpenAI's new "o" models, where more computing power can lead to better results - and higher operating costs. The company has already moved into the premium segment with ChatGPT Pro, which uses a more capable o-model and provides increased access to computing power for $200 a month - ten times the price of the standard $20 Plus subscription.
Despite the premium pricing, Altman revealed on X today that ChatGPT Pro is actually losing money. He admitted that he personally set the price expecting it to be profitable for OpenAI - a miscalculation that likely adds urgency to the pricing overhaul, given how capital-intensive the business is.
AGI and superintelligence remain core goals
Despite all this talk about pricing and products, Altman insists OpenAI hasn't lost sight of its ultimate goal: building Artificial General Intelligence (AGI) and superintelligence (ASI).
Currently, OpenAI's research team works from a separate building a few miles from the rest of the company - though Altman says this was just a logistical space-planning decision. They plan to eventually bring everyone together on one campus, where research will still have its own dedicated area. "Protecting the core of research is really critical to what we do," Altman explains.
What counts as AGI? According to Altman, it's when AI can replace highly skilled human workers. "If you could hire AI as a remote employee to be a great software engineer, I think a lot of people would say, 'OK, that's AGI-ish,'" he says. He thinks we could see something like that within four years.
But Altman admits "AGI" has become a fuzzy term. Questions about autonomy remain unanswered, and the goalposts keep moving as AI advances. This is why OpenAI has begun discussing AI development in terms of specific levels to better represent progress. According to Altman, one potential indicator of superintelligence (ASI) would be AI's ability to accelerate scientific progress.
Over the past year, OpenAI has faced significant internal criticism for allegedly lax safety precautions, particularly from its AGI and ASI teams. The company currently has three safety oversight bodies: the Safety Advisory Group (SAG) for technical studies, a board-level Safety and Security Committee (SSC), and a joint Deployment Safety Board (DSB) with Microsoft. Altman says that this three-tiered structure creates confusion within the company, and they're working on streamlining it.