OpenAI's new coding model GPT-5.3-Codex helped build itself during training and deployment

OpenAI has released GPT-5.3-Codex, its latest coding model. The company says it combines GPT-5.2-Codex's coding capabilities with GPT-5.2's reasoning and knowledge, while running 25 percent faster. Most notably, on Terminal-Bench 2.0 it beats the just-released Opus 4.6 by 12 percentage points—a significant gap by current AI standards—while using fewer tokens than its predecessors. On OSWorld, an agentic computer-use benchmark, it scores 64.7 percent versus 38.2 percent for GPT-5.2-Codex. On GDPval, OpenAI's benchmark for knowledge-work tasks across 44 occupations, it matches GPT-5.2.


OpenAI also claims the model played a role in its own development, with the team using early versions to find bugs during training, manage deployment, and evaluate results. The company says the team was "blown away by how much Codex was able to accelerate its own development."

GPT-5.3-Codex is now available to paying ChatGPT users in the Codex app, CLI, IDE extension, and on the web. API access will follow. OpenAI has classified the model as its first with a "High" cybersecurity risk rating, though the company says this is precautionary, as there's no definitive proof such a classification is necessary.
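For developers planning ahead, switching to the new model should look like any other model swap once API access lands. The minimal sketch below uses OpenAI's existing Python SDK and Responses API; the model string "gpt-5.3-codex" is an assumption based on the product name and has not been confirmed by OpenAI.

    # Sketch only: API access for GPT-5.3-Codex hasn't launched yet, so the
    # model identifier below is an assumption based on the product name.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.responses.create(
        model="gpt-5.3-codex",  # assumed identifier; check OpenAI's model list
        input="Refactor this function to remove the nested loops: ...",
    )

    print(response.output_text)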

OpenAI's Frontier gives AI agents employee-like identities, shared context, and enterprise permissions

OpenAI’s new Frontier platform gives AI agents in companies their own identities, shared context, and the ability to learn from experience. The software launches first with selected enterprise customers.

Voxtral Transcribe 2 offers speech recognition at $0.003 per minute

Mistral AI launches Voxtral Transcribe 2, undercutting competitors on speech recognition pricing. The second-generation speech recognition models start at $0.003 per minute and, according to Mistral, outperform GPT-4o mini Transcribe, Gemini 2.5 Flash, and Deepgram Nova in accuracy. The model family comes in two variants: Voxtral Mini Transcribe V2 for processing larger audio files, and Voxtral Realtime for real-time applications with latency under 200 milliseconds. Voxtral Realtime costs twice as much and uses a proprietary streaming architecture that transcribes audio as it arrives - designed for voice assistants, live captioning, or call center analysis.

Both models support 13 languages, including German, English, and Chinese. New features include speaker recognition, word-level timestamps, and support for recordings up to three hours long. Voxtral Realtime is available as open-weights under Apache 2.0 on Hugging Face and via API, while Voxtral Mini Transcribe V2 is only accessible through Le Chat, the Mistral API, and a playground. Mistral released the first Voxtral generation in July 2025.
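As a rough sketch of the API route: the request below follows the pattern Mistral used for the first Voxtral generation's transcription endpoint. The endpoint path and model name are assumptions and should be checked against Mistral's documentation.

    # Sketch only: endpoint path and model name are assumptions modeled on the
    # first Voxtral generation; Mistral's API docs are authoritative.
    import os
    import requests

    API_KEY = os.environ["MISTRAL_API_KEY"]

    with open("meeting.mp3", "rb") as audio:
        resp = requests.post(
            "https://api.mistral.ai/v1/audio/transcriptions",  # assumed endpoint
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": ("meeting.mp3", audio, "audio/mpeg")},
            data={"model": "voxtral-mini-transcribe-v2"},  # placeholder model name
        )

    resp.raise_for_status()
    # Speaker labels and word-level timestamps arrive as extra fields; their
    # exact parameter names haven't been published.
    print(resp.json()["text"])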

Amazon launches AI Studio to cut film and TV production costs

Amazon plans to use AI to speed up film and TV production while reducing costs. Albert Cheng heads the "AI Studio" at Amazon MGM Studios, which will launch a closed beta program with industry partners in March 2026. First results are expected in May, Reuters reports.

The AI tools aim to bridge the "last mile" between existing AI offerings and what directors actually need. This includes better character consistency across different shots and integration with industry-standard creative tools. Amazon is working with multiple language model providers. Producers like Robert Stromberg ("Maleficent") and animator Colin Brady are already testing the tools, according to the report. The series "House of David" on Amazon is already using AI: for season two, director Jon Erwin combined AI with live-action footage for battle scenes.

Cheng said high production costs are making it harder to create new content. The goal is for AI to speed up processes, not replace people; writers, directors, and actors will remain involved at every step. Amazon has cut around 30,000 jobs since October, including at Prime Video.

Cerebras closes $1 billion funding round at $23 billion valuation after landing OpenAI deal

AI chip startup Cerebras Systems has closed a financing round of over one billion dollars. The funding values the company at around 23 billion dollars, according to a press release. Tiger Global led the round, with Benchmark, Fidelity, AMD, Coatue, and other investors participating.

Cerebras, based in Sunnyvale, California, builds specialized AI chips for fast inference - the speed at which AI models generate responses. The company's approach uses an entire wafer as a single chip, called the "Wafer Scale Engine" (WSE). Its current flagship is the WSE-3.
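Cerebras already serves open models through an OpenAI-compatible inference endpoint, so a standard OpenAI client can be pointed at it to get a feel for the speed argument. In the sketch below, the base URL and model name are assumptions to verify against Cerebras's current documentation.

    # Sketch: Cerebras exposes an OpenAI-compatible endpoint, so the standard
    # OpenAI client can target it. Base URL and model name are assumptions.
    import os
    import time
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",  # assumed OpenAI-compatible endpoint
        api_key=os.environ["CEREBRAS_API_KEY"],
    )

    start = time.perf_counter()
    chat = client.chat.completions.create(
        model="llama-3.3-70b",  # example model; check current availability
        messages=[{"role": "user", "content": "Summarize wafer-scale computing in two sentences."}],
    )
    elapsed = time.perf_counter() - start

    tokens = chat.usage.completion_tokens
    print(f"{tokens} tokens in {elapsed:.2f}s ({tokens / elapsed:.0f} tok/s)")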

The recently announced deal with OpenAI, worth over ten billion dollars, likely helped attract investors. The AI lab plans to acquire 750 megawatts of computing capacity for ChatGPT over three years to speed up response times for its reasoning and code models. OpenAI is reportedly unhappy with Nvidia's inference speeds. Sam Altman recently promised "dramatically faster" responses when discussing the Codex code model—a promise likely tied to the Cerebras deal.

Chinese AI video model Kling 3.0 takes another step toward usable creative assets

Chinese AI video platform Kling has released version 3.0 of its video model, described as an "all-in-one creative engine" for multimodal creation. Key features include improved consistency for characters and elements, video production with 15-second clips and better control, and customizable multi-shot recording. Audio features now support multiple character references along with additional languages and accents. For image generation, Kling 3.0 offers 4K output, a new continuous shooting mode, and what the company calls "more cinematic visuals."

Ultra subscribers get exclusive early access through the Kling AI website. Official details on a general release, API access, or technical documentation aren't available yet. The Kling team published a paper on the Kling Omni models in December 2025. The YouTube channel "Theoretically Media" got early access and published a detailed first impression video. According to the channel, the model should roll out to other subscription levels within a week.