Less work, equal pay: OpenAI lays out its vision for a world reshaped by superintelligence
Key Points
- OpenAI has published a twelve-page policy paper outlining political measures it believes are necessary to manage the transition to superintelligence, including proposals for redistributing AI-driven economic gains.
- Among the key recommendations are a sovereign wealth fund that channels proceeds from AI growth directly to citizens, higher capital gains taxes for top earners, corporate taxes on AI earnings, and pilot programs for a four-day workweek with no loss of pay.
- The paper also argues that access to AI should be treated as fundamental to economic participation, on par with literacy or electricity, signaling OpenAI's push to shape the policy debate around advanced AI systems.
In a new policy paper, OpenAI lays out how governments should prepare for superintelligence, with proposals that include a public wealth fund, a four-day workweek, and higher capital gains taxes for top earners.
OpenAI has published a twelve-page document titled "Industrial Policy for the Intelligence Age." The company outlines policy proposals designed to make sure the transition to superintelligence "benefits everyone." The ideas are "intentionally early and exploratory," not a ready-made list of demands, the company says.
OpenAI defines superintelligence as AI systems "capable of outperforming the smartest humans even when AI assists them." The company says the shift is "already underway": frontier systems have gone from handling tasks that take humans minutes to ones that take hours. If progress keeps up, systems will soon tackle projects that currently take months.
OpenAI puts its proposals in historical context, comparing the coming shift to the Progressive Era and the New Deal, both of which rewrote the social contract after industrialization. The difference this time, the company says, is speed: "The choices we make in the near term will shape how its benefits and risks are distributed for decades to come."
The paper warns of job losses, abuse, loss of control, and concentration of power. OpenAI calls itself out as a potential beneficiary, writing that "There is also a risk that the economic gains concentrate within a small number of firms like OpenAI, even as the technology itself becomes more powerful and widely used." The document focuses mostly on the U.S. but stresses that "the conversation—and the solutions—must ultimately be global."
Axios conducted an in-depth interview with OpenAI CEO Sam Altman about the policy proposals.
A national wealth fund, new taxes, and efficiency dividends
OpenAI wants to create a "Public Wealth Fund" that gives every citizen a stake in AI-driven economic growth. The fund would invest in diversified, long-term assets covering AI companies and the broader economy.
Returns would go directly to citizens, "regardless of their starting wealth or access to capital." The paper doesn't say how the fund would be financed: that's something policymakers and AI companies would need to work out together.
On the tax side, OpenAI wants to update the tax base so programs like Social Security and healthcare stay funded long-term. The paper calls for "higher taxes on capital gains at the top," corporate taxes on "sustained AI-driven returns," and "taxes related to automated labor." Companies that keep and train workers would get wage-linked incentives.
The paper gets specific on working hours. Employers and unions should test a 32-hour or four-day workweek at full pay in temporary pilot programs. If productivity holds up, the shorter week should become permanent. If AI cuts operating costs, companies should put more into pensions, healthcare, and childcare.
Workers should have a say in how AI gets deployed
Employees should get a formal role in deciding how AI shows up in the workplace, OpenAI says. They know best how their work actually gets done and should help pick where AI is used first, like in dangerous, repetitive, or physically demanding tasks. AI should not pile on more work, limit autonomy, or undercut fair pay, the paper states.
If major labor market disruptions hit, the paper lays out a support package: more flexible unemployment benefits, rapid cash assistance, and training vouchers. These would kick in automatically when certain warning indicators cross set thresholds and sunset when things stabilize.
Anyone who loses their job to AI should be able to find work in the care economy: childcare, elder care, education, healthcare, and community services. AI could cut the paperwork in these fields, but human connection stays central. With a "family benefit," OpenAI wants to treat care work as economically valuable, something people can combine with part-time work, continuing education, or starting a business.
AI access, startup support, and infrastructure
OpenAI argues that AI access should become "similar to mass efforts to increase global literacy, or to make sure that electricity and the internet reach remote parts of the globe." A basic level of AI literacy needs to be widely available, including free or low-cost options.
Anyone looking to start a business should get "startup-in-a-box" packages: micro-grants, model contracts, and shared infrastructure. Worker organizations could act as go-betweens, setting up training and helping with contract negotiations.
As an immediate step, OpenAI calls for expanding energy infrastructure. AI data centers should "pay their own way on energy so that households aren't subsidizing them" and create local jobs and tax revenue. New public-private partnerships should help clear funding gaps, permitting backlogs, and siting risks for high-voltage power lines. These deals should be set up so taxpayers are protected from commercial losses and the new infrastructure brings down energy costs for households.
The paper also calls for a distributed network of AI-powered labs to ramp up the capacity for testing AI-generated hypotheses. These labs would plug AI directly into experimental workflows and speed up the cycle between hypothesis generation and testing. This infrastructure should be spread across universities, community colleges, hospitals, and regional research centers, "not concentrated in a small number of elite institutions."
Security in a superintelligence world
The second half of the paper turns to societal resilience. OpenAI wants research and development of tools to guard against misuse in high-risk areas like cyber and biorisks.
Advanced AI systems should be put to work on threat modeling, red-teaming, and robustness testing. Beyond that, the paper calls for complementary defense systems, such as tools that can quickly identify and produce medical countermeasures during outbreaks. Procurement, standards, and insurance frameworks should create "competitive safety markets" where defenses improve as fast as the threats they're meant to stop.
OpenAI also proposes an "AI trust stack": systems for verifying and tracking the origin of AI-generated content and actions that build trust without enabling blanket surveillance. For the most powerful models, the paper wants targeted audit requirements, especially if those models could materially advance "chemical, biological, radiological, nuclear, or cyber risks." These rules should apply "only to a small number of companies and the most advanced models" so they don't restrict broad access to general AI.
When dangerous AI systems are already out in the wild, because model weights got published or systems started replicating on their own, OpenAI calls for "Model-containment playbooks." These would lay out coordinated containment steps, similar to incident response plans in cybersecurity or public health.
The paper also floats a reporting system for companies to share information about incidents, misuse, and near-misses with a designated authority. The goal here is learning and prevention, not punishment.
Cases where models showed "concerning internal reasoning, unexpected capabilities, or other warning signals" without causing harm should also be reported "so the ecosystem can learn from close calls before they become real incidents."
Stricter rules for both governments and companies
Frontier AI companies should adopt governance structures that "embed public-interest accountability into decision-making," the paper argues, for example, by organizing as public benefit corporations with "explicit commitments" to share the benefits of AI widely, including "significant, long-term philanthropic or charitable giving."
OpenAI also wants frontier systems locked down against "corporate or insider capture": protecting model weights and training infrastructure, auditing models for manipulative behavior or "hidden loyalties," and monitoring high-risk deployments "so no individual or internal faction can quietly use AI systems to concentrate power."
For governments, OpenAI calls for clear rules on AI use with "especially high standards for reliability, alignment, and safety." At the same time, AI should boost democratic accountability: AI-assisted workflows in government agencies would create clearer digital records that oversight bodies could review with AI auditing tools.
Specifically, OpenAI wants to modernize transparency frameworks like the Freedom of Information Act so citizens and watchdog groups can use AI to dig into specific questions about government actions. This should also settle when AI interaction logs and records of agency actions qualify as federal records that must be kept for set periods.
To make sure alignment isn't "defined only by engineers or executives behind closed doors," the paper calls for structured ways for the public to weigh in. Developers should publish model specs describing how systems should behave. Governments should root these standards in democratic values and set up mechanisms for representative public input.
Internationally, OpenAI proposes a global network of AI institutes that would work together through shared protocols for information sharing, joint evaluations, and coordinated responses. Over time, this network could grow into an international framework on par with other multilateral bodies for security and standards.