Short

Google is integrating Gemini into Google Translate for better text translations and launching a beta for real-time voice translation through headphones. Gemini now handles idioms, local expressions, and slang more naturally instead of translating them word for word. The improved text translation is rolling out in the US and India for translations between English and nearly 20 languages, including Spanish, Hindi, Chinese, Japanese, and German. The app is available for Android, iOS, and on the web.

The live translation feature taps into Gemini's speech-to-speech capabilities to preserve the speaker's tone, intonation, and rhythm. The beta is currently available on Android in the US, Mexico, and India, supporting over 70 languages. iOS and more countries will follow in 2026.

Google is also bringing its language learning tools to nearly 20 new countries, including Germany, India, Sweden, and Taiwan.

Short

OpenAI appears to be adopting the skills system Anthropic introduced in October, according to a discovery by user Elias Judin shared on X. Support for these skills has surfaced in both the Codex CLI tool and ChatGPT.

Judin found directories named "pdfs" and "spreadsheets" containing "skill.md" files. These files provide specific instructions for processing documents and data. In effect, the main prompt delegates to a more specific prompt that solves a subtask needed for the overall goal, such as extracting text from a PDF. Because a skill is just a folder containing a Markdown file and, optionally, scripts, it's easy to adapt.
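As an illustration, a skill in this style is little more than a directory with a Markdown instruction file. The layout below is a hypothetical sketch modeled on Anthropic's published skill format, not OpenAI's actual files; the folder name, script name, and instruction text are assumptions:

```text
pdfs/
├── skill.md          # instructions the model loads when the skill is invoked
└── scripts/          # optional helpers the instructions may reference
    └── extract_text.py

# skill.md (hypothetical contents)
---
name: pdfs
description: Read, extract text from, and create PDF documents
---
When asked to extract text from a PDF, run the bundled
scripts/extract_text.py helper first, and fall back to OCR
for scanned pages without a text layer.
```

The YAML frontmatter (name and description) lets the model decide when a skill is relevant without loading the full instructions, which is what makes the approach cheap to extend.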

A look at the "skill.md" file for PDF handling reveals specific instructions for reading and creating documents. | Image: Elias Judin via GitHub

The file structure suggests OpenAI is organizing AI tools into app-like modules designed for specific tasks. Judin, who found the feature while using a "5.2 pro" model, documented the findings on GitHub. Anthropic debuted this modular system in October to help its Claude assistant handle specialized tasks.

Short

OpenAI claims its team built the Sora Android app in just 28 days by leveraging its code-generation AI, Codex. According to a report from OpenAI employees Patrick Hum and RJ Marsan, a small team of four engineers utilized an early version of the GPT-5.1 Codex model to build the application, processing around five billion tokens along the way.

According to the authors, the AI handled the bulk of the actual writing—specifically tasks like translating existing iOS code into Android-compatible formats. This allowed the human developers to focus on high-level architecture, planning, and verifying the results. The team described Codex as acting like a new, experienced colleague that just needed clear instructions to get the job done. Despite the rapid timeline, OpenAI reports the app is 99.9 percent stable. You can read a detailed breakdown of their process on the OpenAI blog.

Short

AI is reshaping the media landscape. Some companies are striking partnerships, others are fighting back against alleged copyright infringement, and some are doing both. To keep track of this shifting terrain, Columbia University's Tow Center has launched an "AI Deals and Disputes Tracker." The tool, part of the center's "Platforms and Publishers" project, monitors the evolving relationship between news publishers and AI companies by documenting lawsuits, business deals, and financial grants based on publicly available information.

The tracker lists major agreements and disputes between publishers and AI companies. | Image: Tow Center

The Tow Center says the overview gets updated at the start of each month, with the most recent data from December 12, 2025. The goal is to give readers a clear picture of the legal and economic shifts happening across the industry. Klaudia Jaźwińska compiles the data and welcomes tips on missing developments to keep the tracker up to date.

Short

Google has updated the voice for "Search Live." A new Gemini audio model powers the feature, producing responses that sound more natural and fluid, according to a blog post. Search Live lets users have real-time conversations while displaying relevant websites. The feature is part of Google Search's "AI Mode."

The update rolls out to all Search Live users in the US over the coming week. Users can open the Google app on Android or iOS, tap the Live icon, and speak their question.

The update fits into Google's broader push to build a voice-controlled assistant capable of handling everyday tasks—a goal shared by OpenAI and other major AI companies.

Short

AI lab Anthropic has placed orders totaling $21 billion with Broadcom for Google's AI chips. Broadcom CEO Hock Tan confirmed that the startup is purchasing "Ironwood Racks" equipped with Google's Tensor Processing Units (TPUs).

The move follows a massive cloud partnership between Anthropic and Google announced in late October. That deal grants Anthropic access to up to one million TPUs and is expected to bring over one gigawatt of new AI compute capacity online by 2026. Anthropic maintains a multi-cloud strategy, spreading its workloads across Google TPUs, Amazon's Trainium chips, and Nvidia GPUs.

Short

On Thursday, President Donald Trump signed an executive order threatening to withhold federal funding from states whose AI regulations the administration deems an impediment to American innovation. Speaking to reporters, Trump argued that a patchwork of state-level rules slows down the industry.

The order directs the Secretary of Commerce to review state laws and, in cases of conflict, block access to the $42 billion broadband fund. Earlier attempts in Congress to preempt state AI rules with a federal moratorium had failed.

The directive specifically targets anti-discrimination rules in states like Colorado, which the administration claims lead to "ideological bias." While tech giants like Google and OpenAI welcomed the move toward national standards, Representative Don Beyer warned that the order could create a "lawless Wild West" and potentially violates the Constitution by undermining state-level safety reforms.
