Microsoft brings Anthropic's Claude Cowork into Copilot to run tasks across Outlook, Teams, and Excel

Microsoft has integrated Anthropic's Claude Cowork technology into Copilot. The new feature lets Microsoft 365 handle tasks more autonomously: users describe what they want done, and Cowork builds a plan that runs in the background, pulling from emails, meetings, files, and data across Outlook, Teams, and Excel. It's essentially Claude Cowork's approach, adapted for Microsoft's ecosystem. Use cases include calendar cleanup, meeting prep, company research, and product launch planning. When something's unclear, Cowork asks follow-up questions and waits for approval before making changes.

Cowork runs within Microsoft 365's existing security and compliance boundaries. It's currently in a limited research preview and is expected to become more widely available through the Frontier program by the end of March 2026.

Microsoft's growing willingness to work with AI providers outside OpenAI is notable. Claude Cowork builds on the principles behind Anthropic's Claude Code, which has picked up serious momentum among developers. OpenAI doesn't offer anything comparable yet, but is working on Frontier, an agent-based B2B framework designed to plug deeper into corporate IT.

Anthropic's groundbreaking lawsuit challenges the government's power to punish AI safety decisions

Anthropic is taking the US government to court. The AI developer filed a lawsuit in federal court in San Francisco against 17 federal agencies and the Executive Office of the President, claiming the government is punishing it for refusing to remove two guardrails from Claude: no lethal autonomous warfare and no mass surveillance of Americans.

The Department of War threatened Anthropic with two contradictory moves at once, the lawsuit states: invoke the Defense Production Act to force the company to hand over Claude, or ban it from the supply chain as a security risk. Anthropic argues the government can't simultaneously claim a company is so essential it must be conscripted by law and so dangerous it should be blacklisted.

The lawsuit also challenges the legal basis for the government's actions. The statute cited, 10 U.S.C. § 3252, was written for cases where a foreign adversary might sabotage or subvert an information system. The government's own definition of "foreign adversary" covers China, Russia, Iran, North Korea, Cuba, and Venezuela. Anthropic, a US company, falls outside that definition, which the suit says makes the statute inapplicable to it.

Millions already use AI chatbots for financial advice, but experts warn of clear limits

Millions of people are already using chatbots like ChatGPT for retirement planning, the Financial Times reports. In a Lloyds Bank survey, more than half of respondents used AI for financial advice. However, experts point to clear limitations, and the UK's Financial Conduct Authority recently cautioned against AI hallucinations.

A test by Which? in November, for example, showed that popular chatbots like ChatGPT, Gemini, Perplexity, and Meta AI achieved overall scores of only 55 to 71 percent. Still, the pressure on the financial industry is significant: pension providers like Scottish Widows are now developing their own AI tools.

"I think that's the danger of AI is that people will assume they know what they don't," warns JPMorgan strategist John Bilton. According to Bilton, if users treat AI as an investment tool rather than a data tool, it risks making "underlying behavioural biases — such as the tendency to hold too much in cash or trade too often — stronger."

A counterexample is a 41-year-old software engineer who had ChatGPT restructure his entire $200,000 portfolio. ChatGPT advised him to diversify his risk exposure: 80 percent into a broad market equity index tracker and the remainder into a bond ETF. He told the Financial Times that speaking with the chatbot helped him to "commit to and actually execute" his plan.

Source: FT

U.S. military strikes 3,000 targets in Iran with AI support, but oversight remains "underinvested"

The Wall Street Journal confirms and expands on previous reports about the massive use of generative AI in the U.S. military campaign against Iran. New details reveal how deeply AI is already embedded in intelligence, targeting, and logistics.


Anthropic's Claude Opus 4.6 saw through an AI test, cracked the encryption, and grabbed the answers itself

Anthropic’s Claude Opus 4.6 independently figured out it was being tested during a benchmark, identified the specific test, and cracked its encrypted answer key. According to Anthropic, this is the first documented case of its kind.


Hallucinated references are passing peer review at top AI conferences and a new open tool wants to fix that

Fake citations are slipping past peer review at top AI conferences, and commercial LLMs can't spot the fakes they generate. A new open-source tool called CiteAudit claims to catch what GPT, Gemini, and Claude miss.