Microsoft is disputing a report that it dialed back growth targets for its AI software business after many sales teams fell short last fiscal year. The Information reported that fewer than 20 percent of salespeople in one US unit hit a 50 percent growth target for Azure Foundry, the company's platform for building AI agents. In another group, the original 100 percent goal was reportedly reduced to 50 percent.
A new study from MATS and Anthropic shows that advanced AI models like Claude Opus 4.5, Sonnet 4.5, and GPT-5 can spot and exploit smart contract vulnerabilities in controlled tests. Using the SCONE-bench benchmark, which includes 405 real smart contract exploits from 2020 to 2025, the models produced simulated damage of up to 4.6 million dollars.
In a separate experiment, AI agents reviewed 2,849 new contracts and uncovered two previously unknown vulnerabilities. GPT-5 generated simulated revenue of 3,694 dollars at an estimated API cost of about 3,476 dollars, a net gain of 218 dollars, or 109 dollars per exploit. All experiments were run in isolated sandbox environments.
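The per-exploit figure follows from simple arithmetic over the reported numbers; a minimal sketch (variable names are illustrative, the dollar amounts are the ones reported in the study):

```python
# Reported figures from the follow-up experiment on 2,849 new contracts
revenue_usd = 3694    # simulated revenue across both exploits
api_cost_usd = 3476   # estimated API cost
exploits_found = 2    # previously unknown vulnerabilities uncovered

net_total = revenue_usd - api_cost_usd          # 218 dollars overall
net_per_exploit = net_total / exploits_found    # 109 dollars per exploit
print(net_total, net_per_exploit)
```

The slim 218-dollar margin is the point: at current API prices, autonomous exploitation was only barely profitable in simulation.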
The researchers say the findings point to real security risks but also show how the same models could help build stronger defensive tools. Anthropic recently released a study suggesting that AI systems can play a meaningful role in improving cybersecurity.
Google is rolling out Workspace Studio, a tool for building and managing AI agents inside Google Workspace. The platform lets users automate everything from simple tasks to multi-step processes without writing any code. At the core is the Gemini 3 agent model, which can work independently on long-running tasks and use tools along the way. In Workspace Studio, teams can set up workflows, add instructions, and plug in the tools an agent needs. Microsoft, OpenAI, and other companies are working on similar products.
The agents plug directly into Gmail, Drive, and Chat. They can also connect to services like Asana or Salesforce, though that kind of integration is generally discouraged for security reasons. Google says Workspace Studio will roll out to business customers in the coming weeks. For more background on AI agents, there's an AI Pro webinar and a Deep Dive covering the topic.
Google is testing an AI feature in its "Discover" news feed that automatically rewrites editorial headlines, often turning them into shorter, more provocative, or simply incorrect versions. The result is the kind of engagement-focused headline the company warns against in its own Discover rules, which explicitly reject clickbait.
Google's guidelines set clear limits on the kinds of headlines Discover should surface. | Image: via Google
Google told The Verge that the feature is a small test meant to help users grasp articles more quickly. Still, the AI-generated rewrites replace newsroom decisions with Google's own automated framing, raising concerns that the company is using AI to tighten its grip on how news is shaped and delivered on top of the influence it already exercises.
Black Forest Labs has raised USD 300 million in a new Series B round, bringing its valuation to USD 3.25 billion. Salesforce Ventures and Anjney Midha (AMP) led the round, with existing investors like a16z and Nvidia joining in, along with new backers including Canva and Figma Ventures.
The company is best known for its Flux image models, which it says are among the most widely used on Hugging Face and support products from partners such as Adobe, Meta, and Microsoft. Black Forest Labs released its newest model, Flux 2, just a few days ago.
The fresh funding will help the Freiburg- and San Francisco-based team accelerate its work on what it calls "visual intelligence": models intended to combine perception, generation, memory, and logical reasoning. The company also plans to expand its team.
OpenAI researcher Sebastien Bubeck says GPT-5's math skills saved him a month of work. In a post on X, Bubeck reports that GPT-5 tackled a highly complex mathematical task for him. The model designed the solution path, ran a simulation to check a formula, and then wrote a complete proof in one seamless pass. A task that would previously have taken him around a month, GPT-5 finished in just an afternoon. Bubeck calls it the "most impressive LLM output" he has seen to date.
Programmers who rely on AI assistants tend to ask fewer questions and learn more superficially, according to new research from Saarland University. A team led by Sven Apel found that students were less critical of the code suggestions they received when working with tools like GitHub Copilot. In contrast, pairs of human programmers asked more questions, explored alternatives, and learned more from one another.
Image: Apel et al.
In the experiment, 19 students worked in pairs: six in human-only teams and seven in human-AI teams. According to Apel, many of the AI-assisted participants simply accepted code suggestions because they assumed the AI's output was already correct. He noted that this habit can introduce mistakes that later require significant effort to fix. Apel said AI tools can be helpful for straightforward tasks, but complex problems still benefit from real collaboration between humans.