DeepMind co-founder Shane Legg puts the odds of achieving "minimal AGI" at 50 percent by 2028. In an interview with Hannah Fry, Legg lays out his framework for thinking about artificial general intelligence. He describes a scale running from minimal AGI through full AGI to artificial superintelligence (ASI). Minimal AGI means an artificial agent that can handle the cognitive tasks most humans typically perform. Full AGI covers the entire range of human cognition, including exceptional achievements like developing new scientific theories or composing symphonies.

Legg believes minimal AGI could arrive in roughly two years. Full AGI would follow three to six years later. To measure progress, he proposes a comprehensive test suite: if an AI system can handle all typical human cognitive tasks, and human teams can't find any weak points even after months of searching with full access to every detail of the system, the goal has been reached.

Adobe has integrated Photoshop, Acrobat, and Express directly into ChatGPT's interface. Users can now edit images and documents for free using text commands. The Photoshop integration lets people customize photos with simple descriptions, such as changing backgrounds or adding effects. Adobe Express handles design tasks like creating invitations from templates, while Acrobat makes it possible to edit PDFs like resumes right in the chat.

To set it up, go to "Apps & Connectors" in ChatGPT's settings, select the Adobe app you want, and click "Connect." Then tap the plus sign in the chat, find the app under "More," and type your command. Alternatively, type "/AdobePhotoshop," "/AdobeExpress," or "/AdobeAcrobat" followed by what you want to do.
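For instance, a request might read: "/AdobePhotoshop Remove the background from this photo and replace it with a beach at sunset." The exact wording here is only an illustration; any clear description of the desired edit should work.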

Adobe says commands work best when they're clear and specific, with complex tasks broken into individual steps. After each edit, sliders let users adjust the results.

OpenAI wants to boost risk tolerance among its workforce. According to The Wall Street Journal, the company has scrapped a rule requiring new hires to stay for at least six months before their equity vests. The change aims to ease employee concerns about being laid off before receiving their first batch of shares. OpenAI had already shortened this waiting period from 12 months to six in April.

The move underscores the fierce competition for AI talent. Tech giants like Meta, Google, and Anthropic are courting top researchers with high compensation. OpenAI is set to spend around $6 billion on stock-based compensation this year, nearly half its projected revenue. These high personnel costs are putting additional pressure on margins in an increasingly competitive market.

Google is integrating Gemini into Google Translate for better text translations and launching a beta for real-time voice translation through headphones. Gemini now handles idioms, local expressions, and slang more naturally instead of translating them word for word. The improved text translation is rolling out in the US and India for English and nearly 20 languages, including Spanish, Hindi, Chinese, Japanese, and German. The app is available on Android, iOS, and the web.

The live translation feature taps into Gemini's speech-to-speech capabilities to preserve the speaker's tone, intonation, and rhythm. The beta is currently available on Android in the US, Mexico, and India, supporting over 70 languages. iOS and more countries will follow in 2026.

Google is also bringing its language learning tools to nearly 20 new countries, including Germany, India, Sweden, and Taiwan.

OpenAI appears to be adopting the skills system Anthropic introduced in October, according to a discovery by user Elias Judin shared on X. Support for these skills has surfaced in both the Codex CLI tool and ChatGPT.

Judin found directories named "pdfs" and "spreadsheets" containing "skill.md" files. These files provide specific instructions for processing documents and data. In effect, the main prompt calls a more specific prompt to handle a complex subtask on the way to the main goal, such as extracting text from a PDF. Since a skill is just a folder containing a Markdown file and perhaps some scripts, the format is easy to adapt.
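As a rough illustration of how such a loader could work, here is a minimal Python sketch assuming the folder layout Judin describes (one directory per skill, each with a skill.md inside). The function names and the way instructions get folded into the prompt are hypothetical, not OpenAI's actual implementation:

```python
from pathlib import Path

def load_skills(skills_dir: str = "skills") -> dict[str, str]:
    # Each subfolder (e.g. "pdfs", "spreadsheets") holds a skill.md
    # with task-specific instructions, per Judin's findings.
    skills = {}
    for skill_file in Path(skills_dir).glob("*/skill.md"):
        skills[skill_file.parent.name] = skill_file.read_text()
    return skills

def build_prompt(task: str, skill_name: str, skills: dict[str, str]) -> str:
    # Hypothetical composition step: prepend the skill's instructions
    # to the user's task so the model gets specialized guidance.
    instructions = skills.get(skill_name, "")
    return f"{instructions}\n\nTask: {task}"

skills = load_skills()
print(build_prompt("Extract all text from report.pdf", "pdfs", skills))
```

Because each skill is just text and optional scripts on disk, adding a capability amounts to dropping another folder into the directory.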

A look at the "skill.md" file for PDF handling reveals specific instructions for reading and creating documents. | Image: Elias Judin via GitHub

The file structure suggests OpenAI is organizing AI tools into app-like modules designed for specific tasks. Judin, who found the feature while using a "5.2 pro" model, documented the findings on GitHub. Anthropic debuted this modular system in October to help its Claude assistant handle specialized tasks.

OpenAI claims its team built the Sora Android app in just 28 days by leveraging its code-generation AI, Codex. According to a report from OpenAI employees Patrick Hum and RJ Marsan, a small team of four engineers utilized an early version of the GPT-5.1 Codex model to build the application, processing around five billion tokens along the way.

According to the authors, the AI handled the bulk of the actual code writing, particularly tasks like translating existing iOS code into Android-compatible formats. This allowed the human developers to focus on high-level architecture, planning, and verifying the results. The team described Codex as acting like a new, experienced colleague that just needed clear instructions to get the job done. Despite the rapid timeline, OpenAI reports the app is 99.9 percent stable. You can read a detailed breakdown of their process on the OpenAI blog.

AI is reshaping the media landscape. Some companies are striking partnerships, others are fighting back against alleged copyright infringement, and some are doing both. To keep track of this shifting terrain, Columbia University's Tow Center has launched an "AI Deals and Disputes Tracker." The tool, part of the center's "Platforms and Publishers" project, monitors the evolving relationship between news publishers and AI companies by documenting lawsuits, business deals, and financial grants based on publicly available information.

The tracker lists major agreements and disputes between publishers and AI companies. | Image: Tow Center

The Tow Center says the overview gets updated at the start of each month, with the most recent data from December 12, 2025. The goal is to give readers a clear picture of the legal and economic shifts happening across the industry. Klaudia Jaźwińska compiles the data and welcomes tips on missing developments to keep the tracker up to date.

Google has updated the voice for "Search Live." A new Gemini audio model powers the feature, producing responses that sound more natural and fluid, according to a blog post. Search Live lets users hold real-time voice conversations with Search while it displays relevant websites. The feature is part of Google Search's "AI Mode."

The update rolls out to all Search Live users in the US over the coming week. Users can open the Google app on Android or iOS, tap the Live icon, and speak their question.

The update fits into Google's broader push to build a voice-controlled assistant capable of handling everyday tasks—a goal shared by OpenAI and other major AI companies.
