Investors bet $1 billion on Yann LeCun's vision for AI beyond LLMs
Yann LeCun, former chief AI scientist at Meta and Turing Award winner, has raised over $1 billion for his new startup Advanced Machine Intelligence Labs (AMI Labs) - making it Europe's largest seed funding round ever. Investors include Nvidia, Bezos Expeditions, Singapore's Temasek, and France's Cathay Innovation.
The company was valued at $3.5 billion before the funding round. Alexandre LeBrun, former head of French startup Nabla, serves as CEO, while LeCun will take the role of board chair. The company is launching with about a dozen employees spread across Paris, New York, Singapore, and Montreal.
AMI Labs aims to build so-called world models that understand the physical environment - with applications in areas like robotics and transportation. According to LeCun and LeBrun, today's language models aren't up to the task. Meta isn't an investor but is expected to partner with AMI Labs.
Claude Code gets parallel AI agents that review code for bugs and security gaps
Anthropic has released a code review feature for Claude Code that automatically checks changes for errors before they're merged. Multiple AI agents work in parallel to catch bugs, security vulnerabilities, and regressions. The feature is available as a research preview for Team and Enterprise customers. Anthropic says it has been using the system internally for months: code output per developer has jumped 200 percent over the past year, turning manual review into a bottleneck.
Before deployment, 16 percent of changes received substantive comments - now it's 54 percent. For large changes over 1,000 lines, the system flags problems in 84 percent of cases, averaging 7.5 issues per change. Less than one percent of findings are marked as incorrect. The system doesn't approve any changes on its own - final approval stays with the developer. Costs are billed based on token consumption and average between 15 and 25 dollars per review, depending on size and complexity. Admins can set a monthly spending limit.
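The token-based billing and admin spending cap described above amount to simple bookkeeping. The sketch below illustrates the idea; the per-token prices, token counts, and budget figures are invented placeholders, not Anthropic's actual pricing or API:

```python
# Hypothetical sketch: the prices and limits below are illustrative
# placeholders, not Anthropic's published pricing.

INPUT_PRICE_PER_MTOK = 3.00    # assumed dollars per million input tokens
OUTPUT_PRICE_PER_MTOK = 15.00  # assumed dollars per million output tokens

def review_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the cost of one automated review from its token counts."""
    return ((input_tokens / 1_000_000) * INPUT_PRICE_PER_MTOK
            + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK)

def within_budget(spent_this_month: float, next_review: float, cap: float) -> bool:
    """Admin-style check: allow the next review only if it fits the monthly cap."""
    return spent_this_month + next_review <= cap

# A large change reviewed by several parallel agents might consume a few
# million tokens in total, landing in the $15-25 range the article cites.
cost = review_cost(input_tokens=5_000_000, output_tokens=300_000)
print(round(cost, 2))  # → 19.5
print(within_budget(spent_this_month=480.0, next_review=cost, cap=500.0))  # → True
```

Under these assumed rates, the cost scales linearly with tokens consumed, which is why larger, more complex changes land at the upper end of the quoted range.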
Anthropic is aggressively building out Claude Code this year. Recent additions include automated desktop functions, remote control for smartphones, a memory function, and a scheduling feature for planned tasks.
OpenAI plans to acquire Promptfoo and bake AI security testing directly into its Frontier enterprise platform
OpenAI plans to acquire Promptfoo, a security platform that helps companies catch and fix vulnerabilities in AI applications during development. If the deal goes through, the technology will be baked directly into OpenAI's Frontier enterprise platform, which companies use to build and deploy AI assistants.
The plan is to make automated security testing for prompt injections, jailbreaks, and data leaks a native part of Frontier. OpenAI also wants to beef up oversight, audit trails, and regulatory compliance tooling for enterprise AI deployments.
Promptfoo maintains a popular open-source project that will continue after the acquisition. The deal hasn't closed yet, and neither company has shared financial details. The startup had raised $23 million from investors at an $86 million valuation as of summer 2025.
Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel
Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.
Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through the Frontier program by the end of March 2026.
Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.
Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions
Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.
The Department of War threatened Anthropic with two contradictory moves at once, the lawsuit states: invoke the Defense Production Act to force the company to hand over Claude, or ban it from the supply chain as a security risk. Anthropic argues the government can't claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted at the same time.
The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela.
Millions already use AI chatbots for financial advice, but experts warn of clear limits
Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents said they had used AI for financial advice. However, experts point to clear limitations, including the UK's Financial Conduct Authority, which recently cautioned against AI hallucinations.
A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.
"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. According to Bilton, if users treat AI as an investment tool rather than a data tool, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."
A counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure: 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.
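The split described in that example is simple arithmetic. As a sketch, the dollar amounts work out as follows (the 80/20 weights and $200,000 total come from the article; the asset labels are generic placeholders, not real tickers):

```python
# Illustrative arithmetic only: the 80/20 split and $200,000 figure come from
# the article; the asset labels are generic placeholders.

portfolio = 200_000  # total portfolio value in dollars

allocation = {
    "broad market equity index tracker": 0.80,
    "bond ETF": 0.20,
}

# Dollar amount assigned to each position
amounts = {name: portfolio * weight for name, weight in allocation.items()}

for name, dollars in amounts.items():
    print(f"{name}: ${dollars:,.0f}")
# → broad market equity index tracker: $160,000
# → bond ETF: $40,000
```

The point of the anecdote is less the arithmetic than the commitment device: the plan itself is a textbook two-fund allocation.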
U.S. Military strikes 3,000 targets in Iran with AI support, but oversight remains "underinvested"
The Wall Street Journal confirms and expands on previous reports about the massive use of generative AI in the U.S. military campaign against Iran. New details reveal how deeply AI is already embedded in intelligence, targeting, and logistics.
OpenAI employees hint at a new omni model
A new omni model from OpenAI? Employee posts and a leaked audio project called “BiDi” suggest the company is working on its next big multimodal upgrade.