
OpenAI co-founder Greg Brockman spoke with French President Emmanuel Macron about the opportunities and risks of artificial intelligence. He addressed three main points: regulation, deployment, and predictions.

Brockman highlighted the potential of AI models such as GPT-4 to revolutionize fields like medicine, law, and agriculture. He cited personal experience and recent studies showing that AI can improve the quality of work and reduce costs.

At the same time, he emphasized the importance of safety and regulation in AI development, comparing it to cryptography: like cryptography, AI is a dual-use technology that can be put to both good and bad purposes.

For example, an AI like GPT-4 can answer questions or translate languages, but it could also be used to invent fake news or create fake images.


Brockman suggested that the most capable and advanced AI models, which he calls frontier AI models, should be subject to special oversight. However, this should not come at the expense of other models, especially those from the open-source community.

OpenAI and other large AI companies are sometimes accused of "regulatory capture," that is, of using political regulation to protect their own business model by creating insurmountable hurdles for smaller and open-source developers. Brockman indirectly addresses these accusations here.

Iterative deployment is the best safety measure

According to Brockman, OpenAI's most important safety decision is the iterative development approach it has used since GPT-2.

This approach involves gradually releasing improved versions of the technology, either as direct model releases or through the API and interfaces like ChatGPT, gathering feedback on safety and usefulness, and involving the world in the development process.

This is the only way to create an understanding of the potential and benefits, Brockman believes. "So I’ve accepted that the news headlines will be about fear, but only when people use the technology will they really understand why we want it."


As a positive example, Brockman cites OpenAI's iterative approach that led to the release of ChatGPT, which sparked a global conversation about AI and AI safety. In this context, Brockman emphasizes the importance of democratic input in the development process.

Predicting AI capabilities

Brockman acknowledged that the history of AI is littered with incorrect expert predictions. OpenAI itself has changed course with each GPT model, and he expects GPT-5 to differ from previous models in ways that cannot yet be predicted.

In Brockman's view, more investment in science, including scientific forecasting, rigorously defined safety and performance evaluations, and objective measurements, is needed to guide the development, deployment, and regulation of frontier AI models in the future.

Brockman particularly highlights OpenAI's prediction of GPT-4's key capabilities before the model was fully trained. "We are increasingly learning to see around the corner! We invested very heavily into building a deep learning stack that scales predictably."


Most important, he says, is talking and working with many stakeholders from government, civil society, industry, and academia. "The world continues to be smarter than any of us individually."

Summary
  • OpenAI co-founder Greg Brockman emphasizes the potential of AI models like GPT-4 to revolutionize fields such as medicine, law, and agriculture, and the importance of safety and regulation in AI development.
  • Brockman suggests special oversight of frontier AI models without compromising the open-source community or smaller players. He emphasizes iterative development as an important safety measure.
  • He calls for increased investment in science, including scientific predictions and objective measurements, to guide the development, deployment, and regulation of frontier AI models in the future.
Online journalist Matthias is the co-founder and publisher of THE DECODER. He believes that artificial intelligence will fundamentally change the relationship between humans and computers.