
The Future of Humanity Institute (FHI) at the University of Oxford, one of the world's leading think tanks on topics such as existential risks and the future of artificial intelligence, is shutting down after almost 20 years of operation. According to a final report by long-time researcher Anders Sandberg, this decision was made due to bureaucratic hurdles.

Founded in 2005 by Professor Nick Bostrom, the institute brought together researchers from various disciplines. The goal was to anticipate technological developments that could fundamentally change human existence.

The founding was made possible primarily through a generous donation from entrepreneur James Martin. The Oxford Uehiro Centre for Practical Ethics also supported the institute in its early days. In the following years, FHI built an interdisciplinary network and established itself as a thought leader in its core topics.

Superintelligence, biosecurity, and a vulnerable world

One of the main research areas was global catastrophic risks and existential risks, i.e. threats that could lead to the extinction of humanity. Many concepts that are widespread today, such as the "Vulnerable World" hypothesis, go back to FHI's work. Biosecurity and pandemic preparedness were also a focus.


The institute placed a special early focus on the future of artificial intelligence. Nick Bostrom's 2014 book "Superintelligence" sparked a worldwide debate about the risks of advanced AI. FHI subsequently built an AI governance program that dealt with regulatory issues. Other topics included the safety and controllability of AI systems.

Other lines of research dealt with whole brain emulation, "longtermism," digital minds, and the search for extraterrestrial intelligence. Applied epistemology and decision theory under uncertainty were also part of the portfolio.

Future of Humanity Institute closes, but its ideas live on

Many of FHI's ideas and concepts found their way into culture and politics. Its researchers advised the British Parliament and the United Nations, for example. Some spin-offs, such as the Centre for the Governance of AI, became independent organizations.

Despite these successes, the institute was increasingly confronted with bureaucratic hurdles, according to Sandberg. Fundraising and the hiring of new staff were blocked. In 2023, the faculty decided not to renew expiring contracts, and on April 16, 2024, FHI was officially closed.

Existential risk: university politics

According to the final report, preventing the closure might have required more investment in university politics and social relationships to build a lasting, stable relationship with the faculty. Better communication and cooperation with the surrounding academic community could have avoided misunderstandings and fostered stronger support for FHI within the university, Sandberg writes.


The institute's legacy lives on in the many researchers and organizations it has inspired, Sandberg said. Whether there will be a successor project remains an open question. The key, FHI members say, is to focus on the really important questions for humanity - and to find answers that make a difference.

Summary
  • The Future of Humanity Institute (FHI) at Oxford University, a think tank on topics such as existential risks and the future of AI, is shutting down after nearly 20 years. Bureaucratic hurdles are cited as the reason.
  • Founded in 2005, the FHI brought together researchers from various disciplines to anticipate technological developments that could fundamentally change human existence. Research areas included global catastrophic risks, biosecurity, AI governance, and AI safety.
  • Despite its successes and its advisory work for the British Parliament and the UN, the institute faced increasing obstacles in fundraising and recruitment. According to researcher Anders Sandberg, more should have been invested in university politics and relationships. FHI's legacy lives on in the many researchers and organizations it inspired.