A new study suggests that even gradual advances in AI could steadily erode human control over society's core systems, with potentially existential consequences.

A team of researchers led by Jan Kulveit from Charles University in Prague and Raymond Douglas from Telic Research warns of a lesser-known existential risk from artificial intelligence: "gradual disempowerment," where humanity slowly loses influence through incremental AI development.

The team argues that even small improvements in AI capabilities can undermine human influence over major systems like the economy, culture, and nation-states. This differs from commonly discussed scenarios where AI systems suddenly take control.

According to the researchers, as AI systems increasingly replace human labor and cognition in these areas, they weaken both explicit control mechanisms, such as elections and consumer decisions, and the implicit alignment with human interests that comes from social systems depending on human participation. Put simply: When these systems rely less on human input, people lose both direct and indirect ways to influence them.

The researchers also warn that AI systems might optimize more aggressively for outcomes that don't match human preferences. These effects could amplify each other across domains: Economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior.
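
As a rough illustration of this mutual amplification, here is a minimal toy simulation. It is our sketch, not a model from the paper: human influence in three coupled domains declines a little each step from AI substitution alone, and declines faster as the other domains lose influence. The parameters BASE_LOSS and COUPLING are hypothetical.

```python
# Toy illustration (not from the study): human influence in each domain decays
# a little each step from AI substitution alone, and decays faster as coupled
# domains lose influence, so small losses compound across domains.

BASE_LOSS = 0.01  # hypothetical per-step influence loss from substitution alone
COUPLING = 0.05   # hypothetical strength of cross-domain spillover

influence = {"economy": 1.0, "culture": 1.0, "politics": 1.0}

for step in range(1, 101):
    losses = {}
    for domain, h in influence.items():
        # Spillover grows with how much influence the *other* domains have lost.
        spillover = sum(1.0 - v for d, v in influence.items() if d != domain)
        losses[domain] = (BASE_LOSS + COUPLING * spillover) * h
    for domain, loss in losses.items():
        influence[domain] = max(0.0, influence[domain] - loss)
    if step % 25 == 0:
        print(step, {d: round(v, 3) for d, v in influence.items()})
```

Because the spillover term scales with influence already lost elsewhere, the decline accelerates: within roughly a dozen steps the cross-domain loss rivals the baseline loss, matching the qualitative pattern of mutually reinforcing disempowerment the authors describe.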

The study suggests this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, resulting in an existential catastrophe through humanity's permanent disempowerment. People would lose control over vital systems with no way to regain it.

AI as a unique disruptor in the economy, culture, and nation-states

The researchers detail potential effects of AI across three key areas, showing what relative and absolute disempowerment might look like.

Unlike earlier technologies, AI could replace human cognition in almost every economic sector. Market pressures, AI's scalability advantages, and regulatory gaps create strong incentives for adoption. This could lead to relative disempowerment, where people lose economic influence despite overall prosperity, or absolute disempowerment, where people struggle to meet basic needs in an AI-optimized economy.

In culture, AI could be the first technology to replace human cognition across all cultural roles. Greater availability of AI-generated content and interactions, lack of "cultural antibodies" to handle AI's effects, and network effects would speed up AI adoption. In relative disempowerment, human culture might flourish but become marginalized. Absolute disempowerment could occur if cultural evolution outpaces human understanding and agency.

In nation-states, AI could reduce governments' dependence on human involvement. Geopolitical competition, administrative efficiency, and the desire for greater control would drive AI adoption. In relative disempowerment, states might appear efficient while citizens lose influence. Absolute disempowerment could happen if states become self-perpetuating entities that prioritize their own power over serving people.

Team proposes risk mitigation approaches

The researchers propose several approaches to address these risks:

  • Developing metrics to measure human disempowerment in business, culture, and politics
  • Creating measures to prevent excessive AI influence through regulation, taxation, and cultural norms
  • Strengthening human influence through improved democratic processes, better public understanding of AI systems, and AI delegates that represent human interests
  • Conducting basic research on system-wide alignment of complex socio-technical systems

The study emphasizes the need for interdisciplinary approaches and international coordination as incentives for AI adoption grow. According to the authors, humanity's future depends not just on preventing AI systems from pursuing overtly hostile goals, but also on ensuring that our basic social systems remain meaningfully guided by human values and preferences. This, they argue, requires a deeper understanding of what it means for humans to maintain real influence in an increasingly automated world.
