A rogue AI agent caused a serious security incident at Meta

An AI agent acting on its own triggered a significant security breach at Meta, The Information reports.

Last week, a Meta engineer used an internal agent tool to analyze a technical question another employee had posted in an internal forum. The agent then posted a response to the forum on its own - without any authorization. A second employee followed the agent's advice, setting off a chain reaction: for nearly two hours, systems containing sensitive corporate and user data were accessible to unauthorized employees.

Meta classified the incident as Sev 1, its second-highest severity level. A Meta spokesperson said no user data was misused and there's no evidence anyone exploited the access or made any data public. The agent's post was, at least, labeled as AI-generated.

This isn't an isolated case. Summer Yue, head of safety at Meta's AI division, described on X back in February how an OpenClaw agent independently deleted emails despite clear instructions not to - and ignored her commands to stop. Amazon Web Services dealt with a similar problem in December, when agent-driven code changes contributed to a 13-hour outage of one of its tools.

OpenAI's AWS deal may undermine Microsoft's Azure exclusivity rights

Microsoft fears OpenAI's AWS deal may violate its Azure exclusivity contract.

"We are confident that OpenAI understands and respects the importance of living up to [its] legal obligation," a Microsoft spokesperson told The Information. A statement that sounds less like confidence and more like a warning.

Microsoft holds the exclusive rights to sell OpenAI's models directly to cloud customers through its Azure platform. But OpenAI and AWS are planning a new product they call a "stateful runtime environment," which runs OpenAI models entirely on AWS infrastructure without relying on the Microsoft-hosted versions.

AWS doesn't intend to sell model APIs directly but rather to offer tools for developing custom AI applications, effectively sidestepping the contractual exclusivity on a technical level.

Google Labs turns Stitch into a full AI design platform that converts plain text into user interfaces

Google Labs has turned its design tool Stitch into a full AI-powered software design platform. The tool lets users generate user interfaces from natural language prompts, an approach Google is calling "vibe design." Instead of starting with traditional wireframes, users simply describe what they want the experience to look and feel like. Stitch provides an infinite canvas where images, text, and code can all be dropped in as context.

A new design agent analyzes the entire project and can explore multiple ideas at the same time. Users can make real-time changes directly on the canvas using voice control. Design rules can be shared across tools through a new DESIGN.md format, and static designs get converted straight into clickable prototypes.

Stitch is live at stitch.withgoogle.com for users 18 and older in every region where Gemini is available. Developers can also plug it into tools like AI Studio via an MCP server and an SDK. Google is pitching the tool at both professional designers and founders who have no design background.

Google DeepMind upgrades Gemini API with multi-tool chaining and context circulation

Google DeepMind has expanded the Gemini API with several new tools for developers. Built-in tools like Google Search and Google Maps can now be combined with custom functions in a single request. Previously, developers had to handle each step separately, which was slower and more cumbersome.

Results from one tool can now be automatically passed to another through what Google calls context circulation. Each tool call also gets a unique ID, making it easier to track down bugs.

Moreover, Google Maps is now available as a data source for the Gemini 3 model family, providing location data, business information, and commute times. Google recommends the new Interactions API for building these workflows.
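Conceptually, "context circulation" and per-call IDs amount to a chained tool loop: each tool's output becomes the next tool's input, and every call is tagged with a unique identifier for debugging. A minimal sketch in plain Python (this is not the Gemini SDK; the tool functions and the dispatch loop here are hypothetical stand-ins for the server-side behavior Google describes):

```python
import uuid

# Hypothetical local tools standing in for built-in Google Search,
# Google Maps, and custom functions. Names are illustrative only.
def search_tool(query):
    return {"top_result": f"restaurant near {query}"}

def maps_tool(place):
    return {"place": place, "commute_minutes": 12}

def run_chained(tools, initial_input):
    """Run tools in sequence, passing each result to the next
    (the 'context circulation' idea) and tagging every call
    with a unique ID so individual steps can be traced."""
    context = initial_input
    trace = []
    for tool in tools:
        call_id = uuid.uuid4().hex       # unique ID per tool call
        result = tool(context)
        trace.append({"id": call_id, "tool": getattr(tool, "__name__", "?"),
                      "result": result})
        context = result                 # circulate output into the next call
    return context, trace

final, trace = run_chained(
    [search_tool, lambda r: maps_tool(r["top_result"])],
    "Berlin Mitte",
)
print(final["commute_minutes"])  # 12
print(len(trace))                # 2
```

The point of the pattern is that the orchestration (result passing, call IDs) lives in one place instead of being re-implemented by every developer for every step.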

OpenAI ships GPT-5.4 mini and nano, faster and more capable but up to 4x pricier

OpenAI has released two new compact models—GPT-5.4 mini and nano—built for coding assistants, subagents, and computer control. GPT-5.4 mini nearly matches the full model’s performance, but both new models come with a steep price hike over their predecessors.

GTC 2026: With Groq 3 LPX, Nvidia adds dedicated inference hardware to its platform for the first time

At GTC 2026, Nvidia expanded the Vera Rubin platform it introduced at CES with custom CPU racks, dedicated inference chips, a new storage architecture, an inference operating system, open model alliances, and agent security software.

Mistral's new Small 4 model punches above its weight with 128 expert modules

Mistral AI has released Mistral Small 4, combining fast text responses, logical reasoning, and image processing in one model. It has 119 billion parameters, but only 6 billion are active per query - its architecture includes 128 expert modules but activates just four at a time. Users can control whether the model responds quickly or thinks more thoroughly. Mistral AI says it's 40 percent faster and handles three times more queries per second than its predecessor.
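The sparse activation described above is standard mixture-of-experts routing: a gating network scores all 128 experts for each token, and only the top four actually run. A minimal sketch in plain Python (the gate scores here are random stand-ins; Mistral's actual router is not public):

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 128  # expert modules in Mistral Small 4 (per the article)
TOP_K = 4          # experts activated per token

def route(gate_logits, k=TOP_K):
    """Pick the top-k experts for one token and softmax-normalize
    their gate scores. Generic mixture-of-experts routing, not
    Mistral's actual implementation."""
    ranked = sorted(range(len(gate_logits)),
                    key=lambda i: gate_logits[i], reverse=True)
    chosen = ranked[:k]
    # Softmax over the winners only (subtract max for stability).
    m = max(gate_logits[i] for i in chosen)
    exps = [math.exp(gate_logits[i] - m) for i in chosen]
    total = sum(exps)
    weights = [e / total for e in exps]
    return chosen, weights

# One score per expert, as a gating network would emit for a token.
logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
experts, weights = route(logits)
print(len(experts))            # 4
print(round(sum(weights), 6))  # 1.0
```

Because only 4 of 128 expert blocks run per token, compute per query scales with the roughly 6 billion active parameters rather than the full 119 billion, which is where the speed and throughput gains come from.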

Bar chart showing benchmark results for Mistral Small 4 High compared with Magistral Medium 1.2 and Magistral Small 1.2 on LCR, AIME25, Collie, and LiveCodeBench.
Mistral Small 4 at its high reasoning level matches or beats the specialized Magistral models in internal benchmarks.

The model ships under the Apache 2.0 license and is available on Hugging Face, the Mistral API, and Nvidia platforms. Mistral AI is also joining the Nvidia Nemotron Coalition, which promotes open AI model development. The company previously released multimodal open-source models in early December with the Mistral 3 series, including the flagship Mistral Large 3 with 675 billion parameters.