
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Terence Tao proposes "artificial general cleverness" as a more honest label for what AI actually does

Renowned mathematician Terence Tao has proposed a new way to think about AI capabilities. On Mastodon, Tao questions whether true "artificial general intelligence" (AGI) is actually achievable with current AI tools. His alternative: "artificial general cleverness" (AGC).

According to Tao, "general cleverness" means the ability to solve complex problems using partly improvised methods. These solutions might be random, rely on raw computing power, or draw from training data. That makes them something other than true "intelligence," but they can still succeed at many tasks, especially when strict testing procedures filter out incorrect results, he says.

"This results in the somewhat unintuitive combination of a technology that can be very useful and impressive, while simultaneously being fundamentally unsatisfying and disappointing."

Terence Tao

In humans, cleverness and intelligence are linked, but in AI they're decoupled, Tao argues. The mathematician has recently spoken positively about how AI has sped up his own work.

Google launches new AI agent to help plan your day

CC, an experimental productivity assistant from Google Labs, runs on Gemini. After signup, it connects to Gmail, Google Calendar, Google Drive, and the internet to learn your daily routine. As with any AI agent granted access to private data, this raises familiar security concerns.

Every morning, CC sends an email summary called "Your Day Ahead." It pulls together your appointments, important tasks, and relevant updates, like upcoming bills or deadlines. The agent can also draft emails and create calendar entries when needed. Users control CC by replying to its emails, sharing preferences, or asking it to remember ideas and tasks.

CC is launching as an early test for users 18 and older in the US and Canada. You'll need a personal Google account plus a subscription to Google AI Ultra or another paid service. Those interested can sign up for the waitlist on the Google Labs website.

Google's updated Gemini 2.5 Flash Native Audio handles complex voice tasks better

Google has released an update for Gemini 2.5 Flash Native Audio that makes voice assistants more capable. The model now handles complex workflows better, follows user instructions more precisely, and conducts more natural conversations. Compliance with developer instructions jumped from 84 to 90 percent, and call quality in multi-step conversations has also improved.

According to Google, the updated audio model scores 71.5 percent accuracy on function calls in the ComplexFuncBench benchmark, putting it ahead of OpenAI's gpt-realtime at 66.5 percent. It's worth noting, though, that Google likely didn't test against the latest realtime version, which OpenAI released just yesterday.

The update is now available in Google AI Studio, Vertex AI, Gemini Live, and Search Live. Google Cloud customers are already using the technology, and developers can test the model through the Gemini API.

CHT blasts Trump's executive order for creating an AI accountability vacuum

The Center for Humane Technology (CHT), a nonprofit organization advocating for ethical technology, has criticized a new executive order from the Trump administration that aims to undermine state AI laws.

According to the CHT, the regulation puts public safety at risk by preventing states from meaningfully regulating AI. At the same time, it offers no national replacement framework, creating what the organization calls a vacuum in accountability.

"Americans understand the potential benefits and dangers of this technology. They believe government should help regulate AI, not provide a regulatory shield to an industry that prioritizes growth at any cost." (CHT)

The CHT points to documented AI harms, including deepfakes, fraud, and chatbot-related suicides among young people. Social media already showed what happens when technology goes unregulated, the organization argues. The government should protect the public instead of caving to the tech industry.

Trump argues that a patchwork of varying state regulations is slowing down the industry. AI companies like Anthropic, OpenAI, and Google support national regulation.

Source: CHT
DeepMind co-founder Shane Legg sees 50 percent chance of "minimal AGI" by 2028

DeepMind co-founder Shane Legg puts the odds of achieving "minimal AGI" at 50 percent by 2028. In an interview with Hannah Fry, Legg lays out his framework for thinking about artificial general intelligence. He describes a scale running from minimal AGI through full AGI to artificial superintelligence (ASI). Minimal AGI means an artificial agent that can handle the cognitive tasks most humans typically perform. Full AGI covers the entire range of human cognition, including exceptional achievements like developing new scientific theories or composing symphonies.

Legg believes minimal AGI could arrive in roughly two years. Full AGI would follow three to six years later. To measure progress, he proposes a comprehensive test suite: if an AI system passes all typical human cognitive tasks, and human teams can't find any weak points even after months of searching with full access to every detail of the system, the goal has been reached.