OpenAI CEO Sam Altman has some blunt words about the GPT-4 hype. He also advises the education system not to rely on AI text detectors.
Altman calls the rumor mill surrounding GPT-4 a "ridiculous thing." Rumors of gigantic model sizes are circulating on Twitter and elsewhere, and Altman says he has "no idea where it's all coming from."
OpenAI co-founder and chief scientist Ilya Sutskever made the point more vividly, posting on Twitter a parody of an image that has been circulating for months: a supposedly gigantic GPT-4 model next to a tiny GPT-3 model, meant to suggest an enormous leap in performance. But the parameter figures are pulled out of thin air, or "complete bullshit," as Altman calls them.
— Ilya Sutskever (@ilyasut) January 20, 2023
GPT-4 is going to launch soon.
And it will make ChatGPT look like a toy...
→ GPT-3 has 175 billion parameters
→ GPT-4 has 100 trillion parameters
I think we're gonna see something absolutely mindblowing this time!
And the best part? 👇 pic.twitter.com/FAB5gFjveb
— Simon Høiberg (@SimonHoiberg) January 11, 2023
Moreover, these hype posts reduce the AI model's performance to a single figure: its parameter count. Altman hinted as early as September 2021 that GPT-4 might differ from GPT-3 more in efficiency and data quality than in sheer number of parameters.
Models such as DeepMind's Chinchilla or Sparse Luminous Base show that language models with fewer parameters, a more efficient architecture, and more training data can match or outperform much larger models.
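As a rough back-of-the-envelope illustration of this point, the Chinchilla paper's rule of thumb of roughly 20 training tokens per parameter suggests GPT-3 was undertrained rather than undersized. The constant of 20 is an approximation from that paper, and the token counts below are the commonly reported figures:

```python
# Compare reported training budgets with the Chinchilla rule of thumb
# of ~20 training tokens per parameter (Hoffmann et al., 2022).

def chinchilla_optimal_tokens(params: float) -> float:
    """Approximate compute-optimal training tokens for a given parameter count."""
    return 20 * params

for name, params, trained_tokens in [
    ("GPT-3",      175e9, 300e9),   # 175B parameters, ~300B training tokens
    ("Chinchilla",  70e9, 1.4e12),  # 70B parameters, ~1.4T training tokens
]:
    optimal = chinchilla_optimal_tokens(params)
    print(f"{name}: trained on {trained_tokens:.1e} tokens, "
          f"compute-optimal would be ~{optimal:.1e}")
```

By this estimate, GPT-3 would have needed roughly 3.5 trillion training tokens to be compute-optimal, more than ten times what it actually saw, while the much smaller Chinchilla sits right at its optimum.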
"People are begging to be disappointed - and they will be," Altman says of the possible expectation that OpenAI's GPT-4 already has the capabilities of a general AI.
Altman believes in value diversity in language models
OpenAI and many other companies are heavily invested in the safety of large language models, including the factual correctness of their output and the values they express. For example, they should not generate hate speech.
Altman believes that in the future there will be a variety of such models with different values, from completely safe to more "eccentric" and "creative" ones. The next step, according to Altman, would be for users to tell the AI system how it should behave and which values it should adopt.
Altman would not comment further on Google's statements that chatbots are not yet safe enough for wide deployment. However, he said he hopes journalists will call Google out on that statement if the company launches a product anyway. The search company is reportedly planning a chatbot search with a focus on safety, as well as up to 20 AI products, for 2023.
Education system should not rely on AI text detectors
Language models such as ChatGPT or GPT-3 give students an easy way to automate homework and essays: the tools help them write faster and can even generate entire texts automatically.
The technology is therefore controversial in education: should its use be encouraged to empower learners, or should AI systems be banned outright?
"I get why educators feel the way they feel about this. […] We are going to try to do some things in the short term and there may be ways we can help teachers or anyone be like a little bit more likely to detect output of a GTP-like system," Altman says.
However, Altman added, society should not rely on such solutions in the long term, because a "determined person" will find ways around any detector. They matter for the transition period, he said, but it is impossible to build a perfect detector.
OpenAI is working on watermarking technology for its language models. But such systems could become irrelevant within a few months of release, Altman said.
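OpenAI has not published how its watermark works. One approach from the research literature, a "green list" watermark, biases generation toward a pseudorandomly chosen subset of the vocabulary at each step; a detector that knows the seeding scheme then checks whether a text hits that subset far more often than chance. The sketch below is purely illustrative (the function names, the 50% split, and the use of whole words instead of real tokens are all assumptions, not OpenAI's method):

```python
import hashlib
import random

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step (assumed)

def green_list(prev_token: str, vocab: list[str]) -> set[str]:
    """Derive a pseudorandom 'green' subset of the vocabulary from the previous token."""
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * GREEN_FRACTION)))

def green_hit_rate(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens that land in the green list induced by their predecessor."""
    hits = sum(tokens[i] in green_list(tokens[i - 1], vocab)
               for i in range(1, len(tokens)))
    return hits / (len(tokens) - 1)

# A watermarking generator nudges sampling toward green tokens, so watermarked
# text shows a hit rate well above the ~50% expected from unwatermarked text;
# a simple statistical test on that rate yields the detection decision.
```

Altman's caveat maps directly onto such schemes: a determined user can paraphrase the output, scrambling the token-level statistics the detector depends on.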
"We live in a new world where we have to adapt to generated text. That's fine," says Altman, drawing a comparison to the advent of calculators. Language models are "more extreme", he says, but also offer "extreme advantages".