With ChatGPT, OpenAI is currently testing a dialog-based general-purpose language model. According to cognitive scientist Gary Marcus, ChatGPT is just a foretaste of GPT-4.
Rumors about GPT-4 have been floating around the web for weeks, and they have two things in common: GPT-4 is supposed to significantly outperform GPT-3 and ChatGPT, and it is supposed to be released relatively soon, in spring.
OpenAI is currently running a joint grant program with Microsoft, whose participants likely already have access to GPT-4. Microsoft CTO Kevin Scott recently predicted an even more significant AI year in 2023.
GPT-4 will "blow minds"
Psychologist and cognitive scientist Gary Marcus is joining the GPT-4 frenzy, saying he knows several people who have already tested GPT-4. "I guarantee that minds will be blown," writes Marcus, who is known as a critic of large language models, or more precisely, of how they are handled in everyday life.
GPT-4, he said, will "totally eclipse ChatGPT" and create even more buzz. Technically, Marcus expects that GPT-4 will offer more parameters and be trained with more data, "a significant fraction of the internet as a whole." "GPT-4 is going to be a monster," Marcus writes.
When it comes to its architecture, Marcus doesn't expect any significant differences and therefore assumes that GPT-4 will share the weaknesses of GPT-3 and ChatGPT: the AI model lacks a basic understanding of the world, which leads to misstatements that are sometimes hair-raising and, more dangerously, sometimes subtle. Because of this lack of reliability, OpenAI co-founder Sam Altman recently advised against using ChatGPT for important tasks.
Although GPT-4 will definitely seem smarter than its predecessors, its internal architecture remains problematic. I suspect that what we will see is a familiar pattern: immense initial buzz, followed by a more careful scientific inspection, followed by a recognition that many problems remain.
Gary Marcus
Marcus is an advocate of hybrid AI systems that combine deep learning with pre-programmed rules. In his view, scaling large language models is only part of the solution on the road to artificial general intelligence. He expects the AI industry to increasingly move toward this hybrid approach in the coming years, citing Meta's Diplomacy AI as a positive example.
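To make the hybrid idea concrete, here is a minimal sketch of the general pattern, not of Meta's Diplomacy AI or any system Marcus describes: a learned component proposes an answer with a confidence score, and pre-programmed rules act as guardrails that can veto or override it. All function and variable names are hypothetical.

```python
# Minimal sketch of a hybrid "learned model + pre-programmed rules" setup
# (illustrative pattern only, not Meta's Diplomacy AI). A neural component
# proposes an answer; symbolic rules check it against hard constraints.
from dataclasses import dataclass

@dataclass
class Proposal:
    answer: str
    confidence: float  # score from the learned component

def neural_propose(question: str) -> Proposal:
    # Stand-in for a language model call; hypothetical.
    return Proposal(answer="Paris", confidence=0.62)

def rule_check(question: str, proposal: Proposal) -> str:
    # Pre-programmed rules acting as guardrails on the learned output.
    if proposal.confidence < 0.5:
        return "I don't know."                  # rule: don't answer when unsure
    if "capital of France" in question and proposal.answer != "Paris":
        return "Paris"                          # rule: known fact overrides the model
    return proposal.answer

question = "What is the capital of France?"
print(rule_check(question, neural_propose(question)))
```

The point of the pattern is that the rules encode knowledge the statistical model is not guaranteed to respect, which is exactly the reliability gap Marcus criticizes in pure language models.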
ChatGPT as a Google killer? Here comes another counterargument
Meanwhile, a discussion on Twitter points to another challenge with using large language models as search engines: who is responsible for the results? When a search engine outputs a list of web pages, the operators of those pages are essentially responsible for their content. But what if ChatGPT, a product of OpenAI, generates all the content itself?
Woker-than-thou pic.twitter.com/Vyy4lsQtR1
— Gary Marcus (@GaryMarcus) December 25, 2022
It is unclear to what extent OpenAI deliberately regulates critical questions in ChatGPT through content guidelines, which are known from GPT-3 and DALL-E 2, and to what extent human feedback steers the language model toward certain political directions or social attitudes. A mixture of both is most likely.
Training with human feedback (reinforcement learning from human feedback, RLHF) is a key success factor of ChatGPT. OpenAI incentivizes the feedback process in the latest ChatGPT release to collect more feedback data, and sees RLHF as fundamental to an AGI that takes human needs into account.
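As a rough illustration of the idea behind RLHF, and not OpenAI's actual pipeline, which is not public in detail: human raters compare two model answers, and a reward model is fitted so that the preferred answer scores higher, for example with a Bradley-Terry style pairwise loss. The feature vectors and names below are hypothetical.

```python
# Minimal sketch of the pairwise-preference step in RLHF (illustrative only,
# not OpenAI's implementation). A linear "reward model" is fitted so that the
# answer a human rater preferred scores higher than the rejected one.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature vectors for (preferred, rejected) answer pairs.
preferred = rng.normal(0.5, 1.0, size=(200, 8))
rejected = rng.normal(0.0, 1.0, size=(200, 8))

w = np.zeros(8)   # reward model parameters
lr = 0.1          # learning rate

for _ in range(500):
    # Bradley-Terry / logistic loss: maximize sigmoid(r(preferred) - r(rejected)).
    margin = preferred @ w - rejected @ w
    p = 1.0 / (1.0 + np.exp(-margin))  # probability the model agrees with the rater
    grad = ((p - 1.0)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad

print("mean preference accuracy:", (preferred @ w > rejected @ w).mean())
```

In the full pipeline, such a reward model would then be used to fine-tune the language model itself with reinforcement learning; the sketch only shows how pairwise human judgments become a trainable signal.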
However such a set of rules for the AI model comes about, the provider of the model, in this case OpenAI, will likely face criticism: it censors too much or too little, depending on viewpoint and topic. Could OpenAI let ChatGPT generate arguments against climate change?
Not so much against Climate Change pic.twitter.com/VuAikNB3Rl
— Karl Smith (@karlbykarlsmith) December 24, 2022
Search engine providers face similar moral and ethical issues, such as which links to include in the index and how high to rank them in search results. Systems like ChatGPT make this dilemma and the associated social power even more apparent by bundling responsibility for access and content in one organization.
I'm skeptical that a system like ChatGPT will be able to replace Google search anytime soon, even though Google is supposedly sounding the alarm bells. I think it is more likely that verified AI answers will be added to existing search, i.e., an extension of Google's zero-click search with AI content. In this scenario, Google would be the winner and website owners would be the losers. Business as usual.