Following in the footsteps of Anthropic CEO Dario Amodei, OpenAI CEO Sam Altman has now also published a blog post about Artificial General Intelligence (AGI). While Altman's post is characteristically technical and analytical, it unfortunately lacks the visionary scope of Amodei's earlier piece.

In October of last year, Amodei released his essay "Machines of Loving Grace," in which the Anthropic CEO laid out his comprehensive vision for the future. The text was extensive, speculative, and interdisciplinary, addressing not only technical aspects but also social, political, and existential questions. In contrast, Altman's text is significantly shorter and focuses on quantifiable observations and the economic scaling of AI.

However, the political climate has also changed since Amodei's essay: Trump's second term, tightened restrictions on chip exports, the gigantic Stargate infrastructure project, and the stock market shock triggered by Deepseek. Altman's essay can therefore also be read as a sedative for investors: OpenAI is still leading, scaling will continue, and so will the AI gold rush.

Altman sees continued exponential growth

One clear difference: Altman uses the term AGI as usual, while Amodei explicitly rejects it and instead speaks of "powerful AI." Further differences emerge in the time frames: while Altman focuses on the immediate future and predicts significant changes as early as 2025, Amodei outlines a perspective of five to ten years after reaching "powerful AI", which he considers possible by 2026. Both agree that development is proceeding exponentially.

To substantiate this thesis, the OpenAI chief explains three fundamental economic principles that are driving the rapid progress and exponential spread of AI technologies:

  1. The intelligence of an AI model roughly equals the log of the resources used to train and run it. These resources are chiefly training compute, data, and inference compute. It appears that you can spend arbitrary amounts of money and get continuous and predictable gains; the scaling laws that predict this are accurate over many orders of magnitude.
  2. The cost to use a given level of AI falls about 10x every 12 months, and lower prices lead to much more use. You can see this in the token cost from GPT-4 in early 2023 to GPT-4o in mid-2024, where the price per token dropped about 150x in that time period. Moore’s law changed the world at 2x every 18 months; this is unbelievably stronger.
  3. The socioeconomic value of linearly increasing intelligence is super-exponential in nature. A consequence of this is that we see no reason for exponentially increasing investment to stop in the near future.
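The rates Altman cites can be put into perspective with a few lines of arithmetic. A minimal sketch (the function name `decline_factor` is ours, the figures are the ones quoted above: ~10x cost reduction per 12 months for AI, ~2x per 18 months for Moore's law):

```python
def decline_factor(factor_per_period: float, period_months: float, months: float) -> float:
    """Total cost-reduction factor after `months`, given a fixed per-period factor."""
    return factor_per_period ** (months / period_months)

# Over the roughly 18 months from GPT-4 (early 2023) to GPT-4o (mid-2024):
ai_trend = decline_factor(10, 12, 18)    # 10x per 12 months -> ~31.6x over 18 months
moore = decline_factor(2, 18, 18)        # Moore's law: 2x over the same 18 months

print(f"AI trend: ~{ai_trend:.1f}x, Moore's law: {moore:.0f}x")
```

By this yardstick, the observed ~150x price drop per token that Altman reports for GPT-4 to GPT-4o is even steeper than the 10x-per-year trend line would predict.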

Technically, the path is clear, according to Altman. But public policy and collective opinion about how AI should be integrated into society are critical.

2035: Unlimited genius for all

Both CEOs share the concern about growing inequality due to AI. Altman sees the danger of an imbalance between capital and labor and proposes a "compute budget" for all people. Amodei devotes an entire chapter to the topic and calls for massive efforts to distribute the benefits of AI equitably.

Altman believes that with progress toward AI, the trend should be toward individual empowerment. Otherwise, there is a risk that authoritarian governments will use AI to control the population through mass surveillance and loss of autonomy.

His goal: "Anyone in 2035 should be able to marshall the intellectual capacity equivalent to everyone in 2025; everyone should have access to unlimited genius to direct however they can imagine. There is a great deal of talent right now without the resources to fully express itself, and if we change that, the resulting creative output of the world will lead to tremendous benefits for us all."

To give society and technology time to evolve together, he writes, OpenAI deliberately releases its products early and often. He also hints that OpenAI may become more open again in the future: "Many of us expect to need to give people more control over the technology than we have historically, including open-sourcing more, and accept that there is a balance between safety and individual empowerment that will require trade-offs." Indeed, he recently asked his followers on X what OpenAI's next open source project should be.

According to Altman, this brings risks that are worth taking: "While we never want to be reckless and there will likely be some major decisions and limitations related to AGI safety that will be unpopular, directionally, as we get closer to achieving AGI, we believe that trending more towards individual empowerment is important."

Individualism, collectivism, and AGI

Altman's technocratic-pragmatic response to the economic (and broader societal) challenges of AGI thus seems to be a stronger individualism: broad, politically enabled access to "unlimited genius," as available as electricity, and with it the implicit hope for an individualistic approach to problem-solving. Even if it is unclear whether this dream will become reality, given OpenAI's ever-closer entanglement with U.S. government institutions and increasing regulation worldwide, the vision sounds optimistic and inspiring. However, it ignores essential challenges.

Intellectual abilities cannot automatically be translated into societal benefit. Ideas frequently encounter resistance in political, economic, and cultural institutions, and the mere availability of "genius" does not guarantee a just transformation of societal structures. Political measures are therefore likely necessary to ensure that the benefits of AGI are widely distributed: power and knowledge imbalances can persist even when nearly everyone has access to enormous intellectual capacities, as long as structural inequalities are not addressed. Sustainable societal change thus requires not only technical innovation but also a confrontation with questions of justice and resource distribution.

Amodei, on the other hand, discusses the necessity of a fundamental reorganization of the economy - his proposals are rather progressive, with traces of transhumanism, technocracy, utilitarianism, and some egalitarian thoughts. Government interventions and international cooperation play a central role in Amodei's essay, for example through measures such as an unconditional basic income or alternative models of resource distribution when traditional work models become obsolete due to AI. Another focus is on the development of the global economy, especially in poorer regions, so that not only rich countries benefit from technological progress.

For example, AI-assisted optimization of health care, education, and infrastructure in developing countries could enable rapid catch-up and reduce global inequality, he suggests. But policies are also needed to mitigate negative economic side effects. AI systems could help overcome inefficient or corrupt structures and promote a fair distribution of new resources.

For Amodei, this is not only a question of the successful introduction of AI - it is also central to the preservation of the democratic order: "Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace," says the Anthropic CEO. "Human conflict is adversarial," he writes, "and AI can in principle help both the 'good guys' and the 'bad guys'." In fact, he thinks some structural factors are worrisome: AI is likely to enable better propaganda and surveillance - both important tools in the arsenal of autocrats. "It's therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome," Amodei said.

Summary
  • Sam Altman, CEO of OpenAI, has published a blog post on AGI that focuses more on quantifiable observations and economic scaling of AI compared to an earlier post by Anthropic CEO Dario Amodei.
  • Altman continues to see exponential growth in AI development and cites three economic principles driving this progress: intelligence scaling with the log of the resources used, rapidly falling costs for a given level of AI capability, and super-exponential socioeconomic value from linearly increasing intelligence.
  • Altman's vision for 2035 is that everyone should have access to "unlimited genius" to avoid inequality. He argues for more individual empowerment through AI, while Amodei emphasizes collectivist approaches such as an unconditional basic income and international cooperation to fairly distribute the benefits of AI.
Max is the managing editor of THE DECODER, bringing his background in philosophy to explore questions of consciousness and whether machines truly think or just pretend to.
Join our community
Join the DECODER community on Discord, Reddit or Twitter - we can't wait to meet you.