OpenAI quietly adopts Anthropic's modular skills framework to boost agent capabilities

OpenAI appears to be adopting the skills system Anthropic introduced in October, according to a discovery by user Elias Judin shared on X. Support for these skills has surfaced in both the Codex CLI tool and ChatGPT.

Judin found directories named "pdfs" and "spreadsheets" containing "skill.md" files, which hold specific instructions for processing documents and data. In effect, the main prompt delegates to a more specific prompt that handles a subtask needed for the overall goal, such as extracting text from a PDF. Since a skill is just a folder containing a Markdown file and, optionally, some scripts, the format is easy to adapt.
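Based on the skills format Anthropic has published, a skill folder might look roughly like the sketch below. The field names and instructions here are illustrative, not taken from OpenAI's files:

```markdown
---
name: pdfs
description: Read, extract text from, and create PDF documents.
---

# PDF handling

When asked to extract text from a PDF:
1. Try extracting the embedded text layer first.
2. Fall back to OCR only if the PDF is a scanned image.
```

The model loads the short description up front and only pulls in the full instructions (and any bundled scripts) when the skill looks relevant to the task, which keeps the main context window small.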

A look at the "skill.md" file for PDF handling reveals specific instructions for reading and creating documents. | Image: Elias Judin via GitHub

The file structure suggests OpenAI is organizing AI tools into app-like modules designed for specific tasks. Judin, who found the feature while using a "5.2 pro" model, documented the findings on GitHub. Anthropic debuted this modular system in October to help its Claude assistant handle specialized tasks.

OpenAI claims four engineers and Codex built the Sora Android app in just 28 days

OpenAI claims its team built the Sora Android app in just 28 days by leveraging its code-generation AI, Codex. According to a report from OpenAI employees Patrick Hum and RJ Marsan, a small team of four engineers utilized an early version of the GPT-5.1 Codex model to build the application, processing around five billion tokens along the way.

According to the authors, the AI handled the bulk of the actual code writing, particularly tasks like translating existing iOS code into Android-compatible form. This freed the human developers to focus on high-level architecture, planning, and verifying the results. The team described Codex as acting like a new, experienced colleague that just needed clear instructions to get the job done. Despite the rapid timeline, OpenAI reports the app is 99.9 percent stable. A detailed breakdown of the process is available on the OpenAI blog.

Google improves "Search Live" with new AI voice

Google has updated the voice for "Search Live." A new Gemini audio model powers the feature, producing responses that sound more natural and fluid, according to a blog post. Search Live lets users have real-time conversations while displaying relevant websites. The feature is part of Google Search's "AI Mode".

The update rolls out to all Search Live users in the US over the coming week. Users can open the Google app on Android or iOS, tap the Live icon, and speak their question.

The update fits into Google's broader push to build a voice-controlled assistant capable of handling everyday tasks—a goal shared by OpenAI and other major AI companies.

Anthropic places $21 billion order for Google chips via Broadcom

AI lab Anthropic has placed orders totaling $21 billion with Broadcom for Google's AI chips. Broadcom CEO Hock Tan confirmed that the startup is purchasing "Ironwood Racks" equipped with Google's Tensor Processing Units (TPUs).

The move follows a massive cloud partnership between Anthropic and Google announced in late October. That deal grants Anthropic access to up to one million TPUs and is expected to bring over one gigawatt of new AI compute capacity online by 2026. Anthropic maintains a multi-cloud strategy, spreading its workloads across Google TPUs, Amazon's Trainium chips, and Nvidia GPUs.

Source: CNBC