Anthropic has unveiled Claude, its first ChatGPT-like consumer AI model. Claude is designed to follow a set of written principles intended to make its behavior safer.
Anthropic, an AI company founded in 2021, has officially unveiled its first product. The announcement coincided with OpenAI's GPT-4 presentation, unfortunate timing that caused it to get a bit lost in the shuffle.
“Claude” is in direct competition with OpenAI’s AI models, but unlike GPT-4, it is not multimodal: it accepts and produces only text.
The AI assistant has been integrated into partner applications in recent weeks, including project management tool Notion, question-answering platform Quora, and data-secure search engine DuckDuckGo.
Digital contracts specialist Robin AI has also had the opportunity to test Claude and found it to be “very confident at drafting, summarizing, translations, and explaining complex concepts in simple terms”.
AI platform Scale summarized the performance of ChatGPT and Claude after an intensive comparison:
Overall, Claude is a serious competitor to ChatGPT, with improvements in many areas. While conceived as a demonstration of “constitutional” principles, Claude is not only more inclined to refuse inappropriate requests, but is also more fun than ChatGPT. Claude’s writing is more verbose, but also more naturalistic. Its ability to write coherently about itself, its limitations, and its goals seems also to allow it to answer questions on other subjects more naturally.
On other tasks, such as coding, Claude performed worse and made more errors. On logical reasoning and mathematical computation, Claude and ChatGPT are similar (poor), with OpenAI promising significant progress for GPT-4.
Anthropic does not specify how many parameters Claude’s underlying language model has. However, a recent scientific paper suggests an order of magnitude of 175 billion, which is on par with GPT-3 and GPT-3.5. The number of parameters of GPT-4 is not known, but it is only one of several factors that determine the quality of an AI model.
RLAI rather than RLHF
When it was founded about two years ago, Anthropic did not yet have a concrete business model, but it still raised some $700 million. The reason for this leap of faith by investors could be Anthropic’s approach of prioritizing safety in the development of its AI models – at least that is what it claims. It remains to be seen to what extent the San Francisco-based startup can keep this promise.
“We’ve trained language models to be better at responding to adversarial questions, without becoming obtuse and saying very little,” Anthropic says.
Anthropic relied on a technique called constitutional AI, which also aims to reduce reliance on human feedback. Instead, a custom-built AI was responsible for optimizing Claude’s possible answers according to the principles laid out in its “constitution”. According to Anthropic, the AI aims to be as “helpful, honest, and harmless” as possible.
Often, language models trained to be ‘harmless’ have a tendency to become useless in the face of adversarial questions. Constitutional AI lets them respond to questions using a simple set of principles as a guide.
— Anthropic (@AnthropicAI) December 16, 2022
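The critique-and-revise loop behind constitutional AI can be sketched as follows. This is a minimal illustration of the control flow only: `generate`, `critique`, and `revise` are hypothetical stand-ins for language model calls, not Anthropic's actual implementation, and the two principles are paraphrased examples.

```python
# Toy sketch of a constitutional-AI pass: a draft answer is checked against
# each principle in the "constitution" and revised accordingly. Real systems
# would route critique() and revise() through a language model.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that assist with unethical requests.",
]

def generate(prompt: str) -> str:
    # Stand-in for a raw model completion.
    return f"Draft answer to: {prompt}"

def critique(response: str, principle: str) -> str:
    # Stand-in: the model asks whether the draft violates one principle.
    return f"Check '{response}' against: {principle}"

def revise(response: str, critique_text: str) -> str:
    # Stand-in: the model rewrites the draft in light of the critique.
    return response + " [revised]"

def constitutional_pass(prompt: str) -> str:
    response = generate(prompt)
    for principle in CONSTITUTION:
        response = revise(response, critique(response, principle))
    return response

print(constitutional_pass("Explain constitutional AI"))
```

In the actual training pipeline, the revised answers are then used as preference data, replacing much of the human feedback that RLHF would require.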
Access to chatbot and API is in closed beta
Currently, Claude is only available in a closed beta version. Anthropic provides access to the AI via both a web interface and an API.
Similar to ChatGPT and ChatGPT Turbo, Anthropic offers two versions of its AI assistant: Claude and Claude Instant, with Instant being a cheaper and faster, but presumably less capable, alternative.
Both variants can handle 9,000 tokens of context. For comparison, ChatGPT (probably) handles 4,096 tokens, while GPT-4 in its most powerful version handles up to 32,000 tokens. Unlike OpenAI, Anthropic bills by characters rather than tokens, counting not only output characters but also prompt characters.
Pricing is quoted separately per 1,000,000 prompt characters and per 1,000,000 output characters.
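The per-character billing described above is simple to compute. The sketch below uses placeholder rates purely for illustration; Anthropic's actual prices are not given in this article.

```python
# Cost of a Claude call under character-based billing, where prompt and
# output are priced separately per 1,000,000 characters. The rates passed
# in below are hypothetical placeholders, not real Anthropic prices.

def claude_cost(prompt: str, output: str,
                prompt_rate: float, output_rate: float) -> float:
    """Return cost in dollars; rates are dollars per 1,000,000 characters."""
    return (len(prompt) * prompt_rate + len(output) * output_rate) / 1_000_000

prompt = "Summarize this contract clause. " * 100   # ~3,200 characters
output = "The clause limits liability. " * 200      # ~5,800 characters
# Placeholder rates: $1.00 per million prompt chars, $3.00 per million output chars.
print(f"${claude_cost(prompt, output, 1.00, 3.00):.6f}")
```

Note that because output is billed too, verbose completions raise the cost even when the prompt is short.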