Artificial Intelligence
Depending on whom you ask, artificial intelligence is a field of research, the most important technological innovation in human history, humanity's downfall, or simply a Silicon Valley pipe dream. Can machines really be intelligent? What is intelligence anyway? What opportunities does AI technology offer, and what are the risks? From neural networks to the science fiction vision of super AI, from deepfakes to AI surveillance: THE DECODER delivers the latest AI news and information on all facets of artificial intelligence.
Anthropic and Snowflake have signed a multiyear, 200 million dollar partnership. Claude, Anthropic's language model, will be built directly into Snowflake's data platform, which is used by more than 12,600 companies worldwide. The goal is to let businesses run complex data analyses and interact with their data through natural language.
Anthropic CEO Dario Amodei says the collaboration is meant to bring safer AI capabilities into existing data systems. Snowflake CEO Sridhar Ramaswamy described the deal as a joint product effort designed to deliver practical value for customers. According to Anthropic, companies like Intercom and Simon Data are already using Claude through Snowflake Cortex AI for analytics and customer service automation.
The European Commission has opened a formal antitrust investigation into Meta. At the center is a new policy that makes it harder for third-party AI providers to offer their services through WhatsApp. Since October 2025, Meta has barred external providers from using the WhatsApp Business Solution if their primary product is AI. As a result, OpenAI had to remove its ChatGPT integration from WhatsApp.
Commission Vice President Teresa Ribera warned that dominant digital platforms could use their power to push rivals out of the market. The investigation covers the European Economic Area except for Italy, which is running its own review. If the Commission confirms the allegations, Meta could be found in violation of EU competition rules for abusing a dominant position. Regulators say they will handle the case as a priority.
Nvidia and OpenAI have not yet signed their planned 100 billion dollar deal. Nvidia CFO Colette Kress confirmed this on Tuesday during a conference in Arizona. Even though both companies announced plans in September to provide 10 gigawatts of Nvidia systems for OpenAI, the arrangement is still only a memorandum of understanding. Kress said the two sides are still working toward a final agreement.
The holdup raises new questions about the circular business structures that have become common in the tech industry, where large companies invest in startups that then spend the money on the investor's own products. Any future revenue from the OpenAI deal is not included in Nvidia's current 500 billion dollar forecast. A separate 10 billion dollar investment in competitor Anthropic also remains pending.
Microsoft is disputing a report that it dialed back growth targets for its AI software business after many sales teams fell short last fiscal year. The Information reported that fewer than 20 percent of salespeople in one US unit hit a 50 percent growth target for Azure Foundry, the company's platform for building AI agents. In another group, the original 100 percent goal was reportedly reduced to 50 percent.
Microsoft told CNBC that it has not changed its overall targets and said The Information had conflated growth rates with sales quotas. Even so, Microsoft's stock fell more than two percent at one point, suggesting investors are taking concerns about the sector's momentum seriously.
A new study from MATS and Anthropic shows that advanced AI models like Claude Opus 4.5, Sonnet 4.5, and GPT-5 can spot and exploit smart contract vulnerabilities in controlled tests. Using the SCONE-bench benchmark, which includes 405 real smart contract exploits from 2020 to 2025, the models produced simulated damage of up to 4.6 million dollars.

In a separate experiment, AI agents reviewed 2,849 new contracts and uncovered two previously unknown vulnerabilities. GPT-5 generated simulated revenue of 3,694 dollars at an estimated API cost of about 3,476 dollars, for an average net gain of 109 dollars per exploit. All experiments were run in isolated sandbox environments.
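As a rough sanity check, the reported per-exploit margin follows directly from the revenue and cost figures above (a sketch; averaging over the two discovered vulnerabilities is our assumption about how the study arrived at its number):

```python
# Figures as reported in the study summary; variable names are illustrative.
simulated_revenue_usd = 3694  # total simulated revenue for GPT-5
api_cost_usd = 3476           # estimated API cost of the run
exploits_found = 2            # previously unknown vulnerabilities uncovered

net_gain_per_exploit = (simulated_revenue_usd - api_cost_usd) / exploits_found
print(net_gain_per_exploit)  # → 109.0
```

The thin margin is the point: at current API prices, autonomous exploitation was barely profitable in this setup, but that economics shifts as model costs fall.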
The researchers say the findings point to real security risks but also show how the same models could help build stronger defensive tools. Anthropic recently released a study suggesting that AI systems can play a meaningful role in improving cybersecurity.
Google is rolling out Workspace Studio, a tool for building and managing AI agents inside Google Workspace. The platform lets users automate everything from simple tasks to multi-step processes without writing any code. At the core is the Gemini 3 agent model, which can work independently on long-running tasks and use tools along the way. In Workspace Studio, teams can set up workflows, add instructions, and plug in the tools an agent needs. Microsoft, OpenAI, and other companies are working on similar products.
The agents plug directly into Gmail, Drive, and Chat, and can also connect to third-party services like Asana or Salesforce. Google says Workspace Studio will roll out to business customers in the coming weeks. For more background on AI agents, there's an AI Pro webinar and a Deep Dive covering the topic.
Google is testing an AI feature in its "Discover" news feed that automatically rewrites editorial headlines, often turning them into shorter, more provocative, or simply incorrect versions. The result is the kind of engagement-focused headline the company warns against in its own Discover rules, which explicitly reject clickbait.

Some of the changes are striking. At Ars Technica, the headline "Valve's Steam Machine looks like a console, but don't expect it to be priced like one" was replaced with the inaccurate "Steam Machine price revealed." A Mindfactory report titled "Radeon RX 9070 XT Outsells The Entire NVIDIA RTX 50 Series On Popular German Retailer" was shortened to "AMD GPU tops Nvidia."
Google told The Verge that the feature is a small test meant to help users grasp articles more quickly. Still, the AI-generated rewrites replace newsroom decisions with Google's own automated framing, raising concerns that the company is using AI to tighten its grip on how news is shaped and delivered, on top of the influence it already exercises over distribution.