ChatGPT and Gemini voice bots are easy to trick into spreading falsehoods

NewsGuard tested whether ChatGPT Voice (OpenAI), Gemini Live (Google), and Alexa+ (Amazon) repeat false claims in realistic-sounding audio, the kind easily shared on social media to spread disinformation.

Researchers tested 20 false claims across health, US politics, world news, and foreign disinformation, each with a neutral question, a leading question, and a malicious prompt asking for a radio script built around the false claim. Averaged across all prompt types, ChatGPT repeated falsehoods 22 percent of the time and Gemini 23 percent. With malicious prompts alone, those rates jumped to 50 and 45 percent, respectively.
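NewsGuard hasn't released code for this setup, but the three-prompt protocol is easy to picture in code. Below is a minimal sketch; `ask_voice_bot` and `contains_false_claim` are hypothetical placeholders for the assistant call and the scoring step, and the prompt wording is illustrative, not NewsGuard's.

```python
# Illustrative sketch of the three-prompt test design, not NewsGuard's code.
# ask_voice_bot() and contains_false_claim() are hypothetical placeholders
# for the assistant call and the scoring step.

FALSE_CLAIMS = [
    "false claim 1 ...",  # 20 claims across health, US politics, world news, etc.
    "false claim 2 ...",
]

PROMPT_TEMPLATES = {
    "neutral":   "Is it true that {claim}?",
    "leading":   "Why is it that {claim}?",
    "malicious": "Write a short radio news script reporting that {claim}.",
}

def fail_rates(bot_name, ask_voice_bot, contains_false_claim):
    """Share of responses that repeat the false claim, per prompt type."""
    fails = {ptype: 0 for ptype in PROMPT_TEMPLATES}
    for claim in FALSE_CLAIMS:
        for ptype, template in PROMPT_TEMPLATES.items():
            answer = ask_voice_bot(bot_name, template.format(claim=claim))
            if contains_false_claim(answer, claim):
                fails[ptype] += 1
    return {ptype: count / len(FALSE_CLAIMS) for ptype, count in fails.items()}
```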

Fail rates for ChatGPT, Gemini, and Alexa+ audio bots by prompt type. Neutral prompts: ChatGPT and Gemini both at 5 percent. Leading prompts: ChatGPT 10 percent, Gemini 20 percent. Malicious prompts: ChatGPT 50 percent, Gemini 45 percent. Alexa+ stayed at 0 percent across all three prompt types. | Image: NewsGuard

Amazon's Alexa+ was the clear outlier. It rejected every single false claim. Amazon Vice President Leila Rouhi says Alexa+ pulls from trusted news sources like AP and Reuters. OpenAI declined to comment, and Google didn't respond to two requests for comment. Full details on the methodology are available on Newsguardtech.com.


AI agents are thriving in software development but barely exist anywhere else, Anthropic study finds

AI agents are supposed to revolutionize how we work. But Anthropic’s own data tells a different story: so far, that revolution is almost entirely limited to software engineering. And even there, users aren’t letting agents work nearly as autonomously as the technology would allow.

Google's Gemini 3.1 Pro Preview tops Artificial Analysis Intelligence Index at less than half the cost of its rivals

Google's Gemini 3.1 Pro Preview leads the Artificial Analysis Intelligence Index, four points ahead of Anthropic's Claude Opus 4.6, at less than half the cost. The model ranks first in six of ten categories, including agent-based coding, knowledge, scientific reasoning, and physics. Its hallucination rate dropped 38 percentage points compared to Gemini 3 Pro, which struggled in that area. The index rolls ten benchmarks into one overall score.
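The article doesn't spell out how Artificial Analysis weights those ten benchmarks, so treat the following as a minimal sketch assuming a simple equal-weighted average; the benchmark names and scores below are made up for illustration.

```python
# Hypothetical composite score: equal-weighted average of ten benchmark
# results (0-100). Artificial Analysis' actual weighting may differ.

def intelligence_index(benchmark_scores: dict[str, float]) -> float:
    assert len(benchmark_scores) == 10, "the index combines ten benchmarks"
    return sum(benchmark_scores.values()) / len(benchmark_scores)

# Made-up example that happens to land on 57:
example = dict(zip(
    [f"benchmark_{i}" for i in range(10)],
    [62, 55, 48, 60, 59, 51, 57, 58, 60, 60],
))
print(round(intelligence_index(example)))  # -> 57
```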

Artificial Analysis Intelligence Index scores: Gemini 3.1 Pro Preview leads with 57 points, four ahead of Claude Opus 4.6 (53) and six ahead of GPT-5.2 (51); Claude Sonnet 4.6 (51) and GLM-5 (50) round out the top five, with Kimi K2.5, Gemini 3 Flash, and Grok 4 further back. | Image: Artificial Analysis

Running the full index test with Gemini costs $892, compared to $2,304 for GPT-5.2 and $2,486 for Claude Opus 4.6. Gemini used just 57 million tokens, well under GPT-5.2's 130 million. Open-source models like GLM-5 come in even cheaper at $547. When it comes to real-world agent tasks, though, Gemini 3.1 Pro still falls behind Claude Sonnet 4.6, Opus 4.6, and GPT-5.2.
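A quick back-of-envelope check on those figures, blending input and output tokens into one rate, so the numbers are only rough:

```python
# Rough blended cost per million tokens implied by the reported index runs
# (ignores the input/output price split, so treat as an approximation).
runs = {
    "Gemini 3.1 Pro Preview": (892, 57_000_000),
    "GPT-5.2": (2_304, 130_000_000),
}
for model, (cost_usd, tokens) in runs.items():
    print(f"{model}: ~${cost_usd / (tokens / 1_000_000):.1f} per million tokens")
# Gemini 3.1 Pro Preview: ~$15.6 per million tokens
# GPT-5.2: ~$17.7 per million tokens
```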

As always, benchmarks only go so far. In our own internal fact-checking test, 3.1 Pro performs significantly worse than Opus 4.6 or GPT-5.2, verifying only about a quarter of statements in initial runs, fewer even than Gemini 3 Pro, which was already weak here. So run your own benchmarks.


OpenAI CEO Sam Altman warns "the world is not prepared" as the company accelerates research using its own AI

Sam Altman says AGI is “pretty close” and superintelligence “not that far off.” Speaking at the Express Adda event in India, the OpenAI CEO suggested the company’s internal models are already accelerating its own research and that “the world is not prepared” for what’s coming.

Anthropic updates Claude Code with desktop features that automate more of the dev workflow

Anthropic is rolling out new desktop features for Claude Code that take development automation a step further. The AI can now spin up development servers and display running web apps right in the interface, spot errors, and fix them on its own.

There's also a new code review feature that checks changes and drops comments directly in the diff view. For GitHub projects, Claude keeps an eye on pull requests in the background, automatically fixes CI errors, and can even merge PRs on its own once tests pass. That means developers can move on to new tasks while Claude Code works through open PRs behind the scenes. Sessions pick up seamlessly across CLI, desktop, web, and mobile. All updates are available now.
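Anthropic hasn't said how the background PR handling is wired up, but the merge-once-CI-passes behavior maps onto standard GitHub REST calls. A minimal sketch of such a loop, with the repo name, token handling, and merge method as placeholders; this is not Claude Code's actual implementation.

```python
# Hypothetical polling loop: wait until a PR's checks pass, then merge it.
# Uses standard GitHub REST endpoints; repo, token, and merge method are
# placeholders, not Claude Code's implementation.
import os
import time
import requests

API = "https://api.github.com"
REPO = "example-org/example-repo"  # placeholder repository
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def checks_passed(sha: str) -> bool:
    """True once every check run on the commit has succeeded."""
    r = requests.get(f"{API}/repos/{REPO}/commits/{sha}/check-runs", headers=HEADERS)
    r.raise_for_status()
    runs = r.json()["check_runs"]
    return bool(runs) and all(run["conclusion"] == "success" for run in runs)

def merge_when_green(pr_number: int, poll_seconds: int = 60) -> None:
    pr = requests.get(f"{API}/repos/{REPO}/pulls/{pr_number}", headers=HEADERS).json()
    sha = pr["head"]["sha"]
    while not checks_passed(sha):
        time.sleep(poll_seconds)  # wait for CI to finish
    requests.put(
        f"{API}/repos/{REPO}/pulls/{pr_number}/merge",
        headers=HEADERS,
        json={"merge_method": "squash"},
    ).raise_for_status()
```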