According to internal documents obtained by TechCrunch, Google has been benchmarking its Gemini AI model against Anthropic's Claude. Google contractors are given up to 30 minutes per prompt to evaluate which model produces the better output, based on criteria such as truthfulness and comprehensiveness. According to TechCrunch, Claude's answers tend to be more safety-conscious than Gemini's.

A Google DeepMind spokesperson confirmed that the company does compare results across models, but stressed that it does not use Anthropic's models to directly improve Gemini, which would violate Anthropic's terms of service. This kind of competitive benchmarking is common in the AI industry: companies regularly test their models against competitors' to understand where they stand. Notably, Google is also an investor in Anthropic.