Investors bet $1 billion on Yann LeCun's vision for AI beyond LLMs

Yann LeCun, former chief AI scientist at Meta and Turing Award winner, has raised over $1 billion for his new startup Advanced Machine Intelligence Labs (AMI Labs), making it Europe's largest seed funding round ever. Investors include Nvidia, Bezos Expeditions, Singapore's Temasek, and France's Cathay Innovation.

The company was valued at $3.5 billion before the funding round. Alexandre LeBrun, former head of French startup Nabla, serves as CEO, while LeCun will take the role of board chair. The company is launching with about a dozen employees spread across Paris, New York, Singapore, and Montreal.

AMI Labs aims to build so-called world models that understand the physical environment, with applications in areas like robotics and transportation. According to LeCun and LeBrun, today's language models aren't up to the task. Meta isn't an investor but is expected to partner with AMI Labs.

Source: AMILabs | FT
Claude Code gets parallel AI agents that review code for bugs and security gaps

Anthropic has released a code review feature for Claude Code that automatically checks changes for errors before they're merged. Multiple AI agents work in parallel to catch bugs, security vulnerabilities, and regressions. The feature is available as a research preview for Team and Enterprise customers. The company says it has been using the system internally for months: code output per developer has jumped 200 percent over the past year, turning manual review into a bottleneck.

Before deployment, 16 percent of changes received substantive review comments; now it's 54 percent. For large changes over 1,000 lines, the system flags problems in 84 percent of cases, averaging 7.5 issues per change. Less than one percent of findings are marked as incorrect. The system doesn't approve any changes on its own; that stays with the developer. Costs are billed by token consumption and average between $15 and $25 per review, depending on size and complexity. Admins can set a monthly spending limit.

Anthropic is aggressively building out Claude Code this year. Recent additions include automated desktop functions, remote control for smartphones, a memory function, and a scheduling feature for planned tasks.

OpenAI plans to acquire Promptfoo and bake AI security testing directly into its Frontier enterprise platform

OpenAI plans to acquire Promptfoo, a security platform that helps companies catch and fix vulnerabilities in AI applications during development. If the deal goes through, the technology will be baked directly into OpenAI's Frontier enterprise platform, which companies use to build and deploy AI assistants.

The plan is to make automated security testing for prompt injections, jailbreaks, and data leaks a native part of Frontier. OpenAI also wants to beef up oversight, audit trails, and regulatory compliance tooling for enterprise AI deployments.

Promptfoo maintains a popular open-source project that will continue after the acquisition. The deal hasn't closed yet, and neither company has shared financial details. The startup had raised $23 million from investors at an $86 million valuation as of summer 2025.

Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.

Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through the Frontier program by the end of March 2026.

Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.

U.S. military strikes 3,000 targets in Iran with AI support, but oversight remains "underinvested"

The Wall Street Journal confirms and expands on previous reports about the massive use of generative AI in the U.S. military campaign against Iran. New details reveal how deeply AI is already embedded in intelligence, targeting, and logistics.

OpenAI hardware and robotics leader quits over military deal she says lacked enough deliberation

Update, March 9, 2026:

OpenAI released the following statement:

Caitlin Kalinowski was not the head of all robotics at OpenAI. She was responsible for hardware and operational topics within the Robotics Division. She was also not a researcher and did not lead Robotics Engineering. The Robotics Division is led by Aditya Ramesh, while the Consumer Hardware Division is headed by Peter Welinder.

Original article from March 8, 2026:

OpenAI's hardware and robotics chief Caitlin Kalinowski resigned over the company's military collaboration, announcing her decision on LinkedIn and X. She says surveillance without judicial oversight and lethal autonomy without human sign-off "deserved more deliberation than they got." Kalinowski joined from Meta in November 2024, where she built the Orion AR headset.

Caitlin Kalinowski announced her resignation from OpenAI on LinkedIn, citing concerns about surveillance and lethal autonomy. | Kalinowski via LinkedIn

Her departure follows a contract between OpenAI and the Pentagon giving the military access to its models, a deal Anthropic had already rejected over safety concerns. OpenAI says the contract includes the same hard red lines against mass surveillance and autonomous weapons that Anthropic demanded. But the company agreed to softer "all lawful use" language that still leaves room for interpretation. The US government now wants to make that wording standard for all AI companies working with the state.

Trump administration drafts AI contract rules requiring companies to license systems for "all lawful use"

The Trump administration has drafted strict new guidelines for civilian AI contracts. Per a draft seen by the Financial Times, AI companies would have to grant the government an irrevocable license for "all lawful use," the exact wording Anthropic has resisted and OpenAI has accepted.

The GSA guidelines, drafted over recent months, also ban ideological or partisan judgments in AI outputs, such as favoring diversity programs. That ban is itself an ideological requirement, one that echoes China's political guardrails for AI manufacturers. Another clause requires disclosure of any model tweaks made to comply with non-US regulations like the EU Digital Services Act.

The guidelines land amid the Anthropic fallout: last week, the Pentagon killed a $200 million contract after the company demanded restrictions on mass surveillance of US citizens and on autonomous weapons, citing reliability concerns. Defense Secretary Pete Hegseth accused Anthropic of seeking veto power over military decisions, and the White House labeled it a supply chain risk.