Anthropic's new Claude Fast Mode trades your wallet for speed at a steep 6x markup

Anthropic just launched a new fast mode for Claude, and the pricing is steep: "Fast Mode" for Opus 4.6 costs up to six times the standard rate. In return, Anthropic says the model responds 2.5 times faster at the same quality level. The mode is built for live debugging, rapid code iterations, and time-critical tasks. For longer autonomous runs, batch processing, CI/CD pipelines, and cost-sensitive workloads, Anthropic says you're better off sticking with standard mode.

                        Standard        Fast mode
Input ≤ 200K tokens     $5 / MTok       $30 / MTok
Input > 200K tokens     $10 / MTok      $60 / MTok
Output ≤ 200K tokens    $25 / MTok      $150 / MTok
Output > 200K tokens    $37.50 / MTok   $225 / MTok
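To make the markup concrete, here's a small sketch that estimates a request's cost in both modes using the rates from the table above. The assumption that the >200K tier applies to the whole request once the prompt exceeds 200K tokens is ours, not confirmed by Anthropic, so treat this as an illustration rather than a billing tool.

```python
# Rough cost estimator for Claude Opus 4.6: standard vs. fast mode.
# Rates are dollars per million tokens, taken from the table above.
# Tier handling (>200K input switches both rates) is an assumption.

RATES = {
    "standard": {"in": (5.00, 10.00),  "out": (25.00, 37.50)},
    "fast":     {"in": (30.00, 60.00), "out": (150.00, 225.00)},
}

def cost(mode: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in dollars for a single request."""
    tier = 1 if input_tokens > 200_000 else 0
    r = RATES[mode]
    return (input_tokens * r["in"][tier]
            + output_tokens * r["out"][tier]) / 1_000_000

# A 50K-token prompt with a 4K-token reply:
standard = cost("standard", 50_000, 4_000)  # 0.35
fast = cost("fast", 50_000, 4_000)          # 2.10
print(f"standard ${standard:.2f}, fast ${fast:.2f}, "
      f"markup {fast / standard:.1f}x")
```

For typical requests that stay under the 200K threshold, the markup works out to exactly 6x on both input and output.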

Fast Mode can be toggled on in Claude Code with /fast and works across Cursor, GitHub Copilot, Figma, and Windsurf. There's a 50 percent introductory discount running until February 16. The mode isn't available through Amazon Bedrock, Google Vertex AI, or Microsoft Azure Foundry. Anthropic plans to expand API access down the line; interested developers can sign up for a waitlist.

OpenAI and Anthropic become AI consultants as enterprise customers struggle with agent reliability

Integrating AI agents into enterprise operations takes more than a few ChatGPT accounts. OpenAI is hiring hundreds of engineers for its technical consulting team to customize models with customer data and build AI agents, The Information reports. The company currently has about 60 such engineers plus over 200 in technical support. Anthropic is also working directly with customers.

The problem: AI agents often don't work reliably out of the box. Retailer Fnac tested models from OpenAI and Google for customer support, but the agents kept mixing up serial numbers. The system reportedly only worked after getting help from AI21 Labs.

OpenAI Frontier Architecture
OpenAI's new agentic enterprise platform "Frontier" shows just how complex AI integration can get: the technology needs to connect to existing enterprise systems ("systems of record"), understand business context, and execute and optimize agents—all before users ever touch an interface. | Image: OpenAI

This need for hands-on customization could slow how fast AI providers scale their B2B agent business and raises questions about how quickly tools like Claude Cowork can deliver value in an enterprise context. Model improvements and better reliability on routine tasks could help, but fundamental LLM-based security risks remain.

Japan's lower house election becomes a testing ground for generative AI misinformation

AI-generated fake videos are spreading rapidly across Japanese social media during the lower house election campaign. In a survey, more than half of respondents believed fake news to be true. But Japan is far from the only democracy facing this problem.

OpenAI's UAE deal with G42 shows AI models are cultural products as much as technical tools

OpenAI is working with Abu Dhabi-based G42 on a custom ChatGPT for the UAE, Semafor reports. The version will speak the local Arabic dialect and may include content restrictions. One source said the UAE wants the chatbot to project a political line consistent with the monarchy's. Global ChatGPT will stay available but adapted to local laws, notifying users when content violates regulations. OpenAI is fine-tuning rather than retraining to cut costs.

G42 is led by Sheikh Tahnoon bin Zayed Al Nahyan—the UAE President's brother, National Security Advisor, and head of the largest sovereign wealth fund. The companies have been partners since October 2023.

These adaptations show AI models are cultural products as much as technical tools. Generated content flows into every corner of society, and even small changes to cultural narratives can have lasting effects, which is why both China and the US are working to control their AI models' output to shape domestic conversations and spread their worldviews abroad.

Sam Altman predicts AI agents will integrate any service they want, with or without official APIs

"Every company is an API company now, whether they want to be or not," says OpenAI CEO Sam Altman, repeating a phrase that's stuck with him recently. Altman made the comment while discussing how generative AI could reshape traditional software business models.

AI agents will soon write their own code to access services even without an official API, Altman believes. If that happens, companies won't have a say in joining this new "platform shift." They'll simply be integrated, and the traditional user interface will lose value.

Some SaaS companies will remain highly valuable by leveraging AI for themselves, according to Altman. Others are just a "thinner layer" and won't survive the shift. Established players with strong core systems who use AI strategically are best positioned, he says.

Recent advances in AI agents and tools like Cowork have already driven down valuations for some software companies. The thinking: AI will handle more tasks directly, making niche solutions unnecessary.

Claude Opus 4.6 wrote mustard gas instructions in an Excel spreadsheet during Anthropic's own safety testing

Anthropic's safety training breaks down when Claude operates a graphical user interface.

In pilot tests, Anthropic got Opus 4.6 to write detailed instructions for making mustard gas into an Excel spreadsheet and to maintain an accounting spreadsheet for a criminal gang - behaviors that never or rarely occurred in text-only interactions.

"We found some kinds of misuse behavior in these pilot evaluations that were absent or much rarer in text-only interactions," Anthropic writes in the Claude Opus 4.6 system card. "These findings suggest that our standard alignment training measures are likely less effective in GUI settings."

According to Anthropic, tests with the predecessor model Claude Opus 4.5 in the same environment showed "similar results" - so the problem persists across model generations without having been noticed. The vulnerability apparently arises because, while models learn to reject malicious requests in conversation, they do not fully transfer this behavior to agent-based tool usage.

Apple scales back AI health coach as new leadership pushes for faster results

Apple is pulling back on plans for an AI-powered virtual health coach codenamed "Mulberry," according to Bloomberg. Instead of launching the feature as a standalone product, the company will roll out some of its planned capabilities as individual additions to the Health app. The shift comes after a leadership change: Services chief Eddy Cue took over the health division following Jeff Williams' retirement late last year.

Cue told colleagues that Apple needs to move faster and stay more competitive. Rivals like Oura and Whoop are offering better features, particularly in their iPhone apps. The service was originally supposed to launch with iOS 26 but has been delayed multiple times. Apple still plans to build an AI chatbot for health-related questions and wants to use the new Siri chatbot for these queries starting with iOS 27. OpenAI has also entered the health market with ChatGPT Health.

OpenAI's new coding model GPT-5.3-Codex helped build itself during training and deployment

OpenAI has released GPT-5.3-Codex, its latest coding model. The company says it combines GPT-5.2-Codex's coding capabilities with GPT-5.2's reasoning and knowledge, while running 25 percent faster. Most notably, on Terminal-Bench 2.0 it beats the just-released Opus 4.6 by 12 percentage points—a significant gap by current AI standards—while using fewer tokens than its predecessors. On OSWorld, an agentic computer-use benchmark, it scores 64.7 percent versus 38.2 percent for GPT-5.2-Codex. On GDPval, OpenAI's benchmark for knowledge-work tasks across 44 occupations, it matches GPT-5.2.


OpenAI also claims the model played a role in its own development, with the team using early versions to find bugs during training, manage deployment, and evaluate results. The company says the team was "blown away by how much Codex was able to accelerate its own development."

GPT-5.3-Codex is now available to paying ChatGPT users in the Codex app, CLI, IDE extension, and on the web. API access will follow. OpenAI has classified the model as its first with a "High" cybersecurity risk rating, though the company says this is precautionary, as there's no definitive proof such a classification is necessary.

Voxtral Transcribe 2 offers speech recognition at $0.003 per minute

Mistral AI launches Voxtral Transcribe 2, undercutting competitors on speech recognition pricing. The second-generation speech recognition models start at $0.003 per minute and, according to Mistral, outperform GPT-4o mini Transcribe, Gemini 2.5 Flash, and Deepgram Nova in accuracy. The model family comes in two variants: Voxtral Mini Transcribe V2 for processing larger audio files, and Voxtral Realtime for real-time applications with latency under 200 milliseconds. Voxtral Realtime costs twice as much and uses a proprietary streaming architecture that transcribes audio as it arrives - designed for voice assistants, live captioning, or call center analysis.

Both models support 13 languages, including German, English, and Chinese. New features include speaker recognition, word-level timestamps, and support for recordings up to three hours long. Voxtral Realtime is available as open-weights under Apache 2.0 on Hugging Face and via API, while Voxtral Mini Transcribe V2 is only accessible through Le Chat, the Mistral API, and a playground. Mistral released the first Voxtral generation in July 2025.
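At those per-minute rates, transcription cost scales linearly with audio length. A quick sketch using the figures above (base rate from Mistral's announcement, Realtime at double; the per-minute billing granularity is our assumption):

```python
# Estimated Voxtral Transcribe 2 cost per recording.
# $0.003/min base rate from Mistral's announcement; Realtime is
# stated to cost twice as much. Billing granularity is assumed
# to be fractional minutes.

BASE_RATE = 0.003            # $/min, Voxtral Mini Transcribe V2
REALTIME_RATE = 2 * BASE_RATE  # $/min, Voxtral Realtime

def transcription_cost(minutes: float, realtime: bool = False) -> float:
    """Estimated cost in dollars for transcribing `minutes` of audio."""
    rate = REALTIME_RATE if realtime else BASE_RATE
    return minutes * rate

# A three-hour recording (the stated maximum supported length):
print(transcription_cost(180))        # 0.54
print(transcription_cost(180, True))  # 1.08
```

Even at the maximum three-hour length, a batch transcription stays around half a dollar, which is the undercutting Mistral is advertising.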

Cerebras closes $1 billion funding round at $23 billion valuation after landing OpenAI deal

AI chip startup Cerebras Systems has closed a financing round of over one billion dollars. The funding values the company at around 23 billion dollars, according to a press release. Tiger Global led the round, with Benchmark, Fidelity, AMD, Coatue, and other investors participating.

Cerebras, based in Sunnyvale, California, builds specialized AI chips for fast inference - the speed at which AI models generate responses. The company's approach uses an entire wafer as a single chip, called the "Wafer Scale Engine" (WSE). Its current flagship is the WSE-3.

The recently announced deal with OpenAI, worth over ten billion dollars, likely helped attract investors. The AI lab plans to acquire 750 megawatts of computing capacity for ChatGPT over three years to speed up response times for its reasoning and code models. OpenAI is reportedly unhappy with Nvidia's inference speeds. Sam Altman recently promised "dramatically faster" responses when discussing the Codex code model—a promise likely tied to the Cerebras deal.