The non-profit AI research organization LAION has launched a petition calling for open AI models for a “secure digital future”. The goal of the initiative is a publicly funded supercomputer for international open source AI research and development.
According to LAION’s petition, a cluster of 100,000 state-of-the-art AI accelerators would provide the international supercomputing infrastructure needed to train foundational open-source AI models. The initiative aims to ensure technological independence, promote global innovation and security, and preserve democratic principles for future generations.
“We must act swiftly to secure the independence of academia and government institutions from the technological monopoly of large corporations such as Microsoft, OpenAI, and Google. Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few,” write LAION members Christoph Schuhmann, Huu Nguyen, Robert Kaczmarczyk, and Jenia Jitsev.
LAION also wants research to keep moving at a fast pace, and it opposes the “AI pause” proposed by some researchers and business leaders, arguing that a pause could “create a breeding ground for obscure and potentially malicious corporate or state actors to make advancements in the dark while simultaneously curtailing the public research community’s ability to scrutinize the safety aspects of advanced AI systems thoroughly.”
Humanity faces a “new epoch”
The broad and profound impact of AI and foundation models like GPT-4, spanning scientific research, education, government, and small and medium-sized businesses, is one reason an open initiative is needed, according to LAION.
Making access to these systems as open as possible is essential, they say. Failure to do so could have “severe repercussions for our collective future.”
Increasingly, we are witnessing the emergence of a system wherein educational institutions, government agencies, and entire nations become dependent on a select few large corporations that operate with little transparency or public accountability. To secure our society’s technological independence, foster innovation, and safeguard the democratic principles that underpin our way of life, we must act now.
The proposed research institution, modeled on the CERN project, would be funded by the international community, in particular the EU, the UK, Canada, and Australia. The computing capacity would be managed by “experts from the machine learning and supercomputing research community” and “overseen by democratically elected institutions” of the participating countries.
The institution would require security measures comparable to those of a biological research laboratory, with multiple security levels, and would be staffed by internationally recognized experts. Research results would have to be communicated transparently to the scientific community and society.
GPT-4 for the world
Such a platform would allow researchers and institutions around the world to train and refine advanced AI models like GPT-4, harness their capabilities for the public good, and study how they work. Providers like OpenAI offer only programming interfaces, not the models themselves, which limits research opportunities.
“By making these models open source and incorporating multimodal data (audio, video, text, and program code), we can significantly enrich academic research, enhance transparency, and ensure data security,” LAION writes.
You can support the petition at Open Petition. So far, more than 2,000 people have signed, roughly 22 percent of the 10,000-signature goal. Most signatories are from the US and Germany, with about 1,000 from the EU. Signature collection will continue for more than two months.
The German non-profit association Large-scale Artificial Intelligence Open Network (LAION) is generally committed to the publication of AI models, datasets, and code.
LAION is best known for the LAION-5B dataset, which contains links to images used to train many image-generating AI models, such as Stable Diffusion and Imagen. The dataset has drawn criticism because its links sometimes point to copyrighted or private material that was not intended for AI training.
LAION was founded in Germany in the summer of 2021. The founding members come from all over the world and work on the projects remotely.