According to internal documents obtained by TechCrunch, Google has been benchmarking its Gemini AI model against Anthropic's Claude. Google contractors are given up to 30 minutes per prompt to evaluate which model produces the better output, judging criteria such as truthfulness and comprehensiveness. Claude's answers tend to be more safety-conscious than Gemini's, according to TechCrunch. A Google DeepMind spokesperson confirmed that the company compares outputs across models, but stressed that it does not use Anthropic's models to directly improve Gemini, which would violate Anthropic's terms of service. This kind of competitive benchmarking is common in the AI industry: companies regularly evaluate their models against competitors' to understand where they stand. Notably, Google is also an investor in Anthropic.
