OpenAI co-founder Sam Altman warns against using ChatGPT for important topics.
Even faster and more forcefully than image generators before it, AI technology has reached the mainstream with ChatGPT. On Twitter, #ChatGPT is trending, and countless outlets, large and small, are reporting on the sometimes startling answers and texts produced by OpenAI's latest text model. Some even credit ChatGPT with the breakthrough of artificial intelligence, the dawn of a new computing age.
The enormous response proves one thing above all: training with human feedback works. That ChatGPT is so well received probably has less to do with the quality of its text and code output, which is on par with what GPT-3.5 has been producing since early 2022 and GPT-3 since 2020.
The amazement over ChatGPT has much more to do with its interface: thanks to its optimization for dialog through human feedback, the AI almost always seems to understand its chat partners. Even with sloppily written instructions, the system has good answers ready. That is new.
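To make the "sloppy instructions" point concrete, here is a minimal sketch of sending an unpolished prompt to an instruction-tuned model. ChatGPT itself is so far only available through its web interface, so the sketch assumes the openai Python package (pre-1.0 interface) and the text-davinci-003 model, a GPT-3.5-family sibling that is also tuned with human feedback and reachable via API.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; assumes the pre-1.0 openai package

# A deliberately vague, typo-ridden instruction, as a user might type it in chat.
sloppy_prompt = "expln recursion to me like im 10, short pls, w/ a tiny python exmple"

response = openai.Completion.create(
    model="text-davinci-003",  # instruction-tuned GPT-3.5-family model with API access
    prompt=sloppy_prompt,
    max_tokens=200,
    temperature=0.7,
)

print(response["choices"][0]["text"].strip())
```

An older, purely next-word-prediction model would often just continue such a fragment; the feedback-tuned models treat it as a request and answer it.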
ChatGPT pushes boundaries but doesn't cross them
Some already see a turning point for Google's quasi-monopoly on search. Instead of typing in a term and then clicking through websites, you could simply let ChatGPT generate answers to your questions.
But that scenario, if it comes at all, is likely still further off: ChatGPT may be pushing the limits of large language models, but it is not crossing them yet.
ChatGPT continues to suffer from the fundamental problems of current language models:
- They invent facts but present them with great confidence and in polished language.
- They offer no assessment of how reliable their information is and no source transparency.
- Moreover, large language models can reinforce existing prejudices and create new ones.
There are concepts on how to solve these problems, such as giving LLMs access to the Internet, but no solutions yet.
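To make the idea of giving LLMs access to the Internet concrete, here is a minimal retrieval-augmented sketch: snippets from a separate web search step are passed to the model as numbered context, and the model is asked to cite them. This illustrates the general concept, not OpenAI's method; the openai package usage, the text-davinci-003 model, and the answer_with_sources helper are assumptions for the example, and the hard part (retrieving and verifying good sources) is not shown.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; assumes the pre-1.0 openai package


def answer_with_sources(question, documents):
    """Answer a question using only the supplied web snippets and cite them.

    `documents` is assumed to be a list like [{"url": "...", "text": "..."}, ...]
    returned by some prior search step (not implemented here).
    """
    # Number the snippets so the model can refer back to them as [1], [2], ...
    context = "\n".join(
        f"[{i + 1}] {doc['url']}\n{doc['text']}" for i, doc in enumerate(documents)
    )
    prompt = (
        "Answer the question using ONLY the numbered sources below. "
        "Cite sources as [n]. If the sources are insufficient, say so.\n\n"
        f"{context}\n\nQuestion: {question}\nAnswer:"
    )
    response = openai.Completion.create(
        model="text-davinci-003",  # GPT-3.5-family model; ChatGPT itself has no API yet
        prompt=prompt,
        max_tokens=300,
        temperature=0,  # low temperature to discourage embellishment
    )
    return response["choices"][0]["text"].strip()
```

Grounding the answer in retrieved text with explicit citations addresses the source-transparency point above, but it does not by itself stop the model from misreading or over-trusting the snippets it is given.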
Google, for its part, introduced its own large language models months ago, such as PaLM and the dialog-optimized LaMDA, but has not yet brought them to market, partly because of the safety concerns mentioned above. Google also has more questions to answer about its own business model, which is built on search advertising: it would have to reinvent itself in part, and therefore cannot rush ahead the way OpenAI can.
The scientific community likewise protested Meta's Galactica scientific language model over safety and reliability concerns, and within days the coding Q&A platform Stack Overflow had had enough of ChatGPT's nonsense and banned AI-generated answers.
OpenAI co-founder Sam Altman hits the brakes
OpenAI co-founder Sam Altman knows about ChatGPT's weaknesses and highlights them explicitly on Twitter. This may partly be (false) modesty or clever expectation management, but it is also simply true.
ChatGPT is "incredibly limited," Altman writes, but "good enough at some things to create a misleading impression of greatness."
Currently, he says, it's a mistake to use ChatGPT for important tasks. The system is a glimpse of progress; in terms of robustness and reliability, there is still much work to be done, Altman writes.
ChatGPT is great for creative inspiration, he writes, but not for reliable answers to factual questions. "We will work hard to improve!" Altman adds.
More interesting than ChatGPT, which is still based on the GPT-3.5 series, will be the leap in quality with GPT-4 next year. Rumor has it that the system will be introduced in the first quarter of 2023.
Microsoft CTO Kevin Scott promised a few days ago that 2023 would be the "most exciting AI year ever." Microsoft is one of OpenAI's biggest investors, so Scott may already have an idea of what GPT-4 is capable of.