Ad
Skip to content

Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Read full article about: AI agents in GitHub and GitLab workflows create new enterprise security risks

Aikido Security warns that plugging AI agents into GitHub and GitLab workflows opens up a serious vulnerability in enterprise environments. The issue hits widely used tools like Gemini CLI, Claude Code, OpenAI Codex, and GitHub AI Inference.

According to the security firm, attackers can slip hidden instructions into issues, pull requests, or commits. That text then flows straight into model prompts, where the AI interprets it as a command instead of harmless content. Because these agents often have permission to run shell commands or modify repos, a single prompt injection can leak secrets or alter workflows. Aikido says tests showed this risk affected at least five Fortune 500 companies.

Aikido

Google patched the issue in its Gemini CLI repo within four days, according to the report. To help organizations secure their pipelines, Aikido published open search rules and recommends limiting the tools available to AI agents, validating all inputs, and avoiding the direct execution of AI outputs.

Read full article about: Google rolls out Gemini 3 "Deep Think" for Gemini Ultra subscribers

Google AI just released an updated "Deep Think" mode for Google AI Ultra subscribers using the Gemini app. Built on the Gemini 3 model, the feature aims to boost the AI's reasoning skills. Google says the mode uses "advanced parallel thinking" to investigate multiple hypotheses at the same time, making these models better suited for complex scientific tasks than for mundane office work.

The technology "builds on" on the Deep Think variant of Gemini 2.5, which recently posted impressive scores at the International Mathematical Olympiad and a major programming competition. To try it out, subscribers select "Deep Think" in the app's input field and choose the "Gemini 3 Pro" model from the menu. The Ultra subscription currently costs $250 per month for the standard plan.

The release looks like a direct response to DeepsSeek's new open-source math model and an upcoming system from OpenAI. Reports suggest OpenAI plans to launch its new model next week, with performance expected to outperform Gemini 3.

Read full article about: EU plans five AI gigafactories with 100,000 high-performance AI chips

The European Union is planning a major expansion of its AI infrastructure. The European Investment Bank (EIB) and the European Commission want to build up to five AI gigafactories across Europe to boost compute capacity for advanced AI models and reduce the region's dependence on foreign technology.

The Commission plans to fund the effort with 20 billion euros through its InvestAI program, and the EIB is considering additional loans. Each site will include about 100,000 high-performance AI chips, described as "the most advanced" available and roughly four times more than existing facilities.

"AI gigafactories will train the most complex, very large AI models, which require extensive computing infrastructure for breakthroughs in domains such as medicine, cleantech and space."

The project falls under the EIB's TechEU program, which aims to mobilize 250 billion euros in investment by 2027.

Comment Source: EIB
Read full article about: Microsoft pushes back on report claiming it cut AI sales targets

Microsoft is disputing a report that it dialed back growth targets for its AI software business after many sales teams fell short last fiscal year. The Information reported that fewer than 20 percent of salespeople in one US unit hit a 50 percent growth target for Azure Foundry, the company's platform for building AI agents. In another group, the original 100 percent goal was reportedly reduced to 50 percent.

Microsoft told CNBC that it has not changed its overall targets and claimed The Information mixed up growth with quotas. Even so, Microsoft's stock dropped more than two percent at times, suggesting investors are taking concerns about the sector's momentum seriously.

Read full article about: Anthropic study shows leading AI models racking up millions in simulated smart contract exploits

A new study from MATS and Anthropic shows that advanced AI models like Claude Opus 4.5, Sonnet 4.5, and GPT-5 can spot and exploit smart contract vulnerabilities in controlled tests. Using the SCONE-bench benchmark, which includes 405 real smart contract exploits from 2020 to 2025, the models produced simulated damage of up to 4.6 million dollars.

Anthropic

In a separate experiment, AI agents reviewed 2,849 new contracts and uncovered two previously unknown vulnerabilities. GPT-5 generated simulated revenue of 3,694 dollars at an estimated API cost of about 3,476 dollars, averaging a net gain of 109 dollars per exploit. All experiments were run in isolated sandbox environments.

The researchers say the findings point to real security risks but also show how the same models could help build stronger defensive tools. Anthropic recently released a study suggesting that AI systems can play a meaningful role in improving cybersecurity.