Short

Anthropic's Claude now integrates directly with Microsoft 365, letting users pull in content from Outlook, SharePoint, OneDrive, and Teams. This means emails, calendars, and documents can show up right in a Claude chat, making it easier to bring company data into the conversation.

Anthropic is also adding a centralized search that covers company resources like HR documents and team guidelines. These features are available to Claude Team and Enterprise users, but admins have to enable them first.

The rollout follows Microsoft's recent Copilot integration in Windows, underscoring how closely the competition for workplace AI is tied to the digital office ecosystem.

Short

Reuters reports, citing two people familiar with the matter, that AI startup Anthropic expects to nearly triple its annual revenue by 2026. The company projects annualized revenue of $9 billion by the end of 2025 and is targeting more than $20 billion in 2026 under its base scenario, or as much as $26 billion in its most optimistic case.

Anthropic told Reuters that its annualized revenue stood at around $7 billion as of October. Most of the company's growth comes from enterprise clients, who account for roughly 80 percent of its revenue. The startup was recently valued at $183 billion after raising $13 billion in new funding.

Short

Bloomberg reports that Ke Yang, head of Apple's AI search division, is leaving the company to join Meta Platforms. Yang led Apple's "Answers, Knowledge and Information" (AKI) team, which develops features designed to make Siri more like ChatGPT by giving it access to web content.

The AKI group plays a key role in a major Siri update planned for March, part of Apple's push to strengthen its AI offerings. Yang's departure is one of several recent exits from Apple's AI divisions, including members of the "Apple Foundation Models" group and former executives like Robby Walker and Ruoming Pang, who also moved to Meta. After Yang's exit, the AKI team will report to Benoit Dupin, a deputy under Apple's AI chief, John Giannandrea.

Short

Google Labs Creative Director Henry Daubrez says the new Veo 3.1 update is being overhyped. Though it adds helpful features like image-to-image animation, he considers the changes minor. Daubrez blames financial pressure in the AI industry for turning small updates into big marketing moments. Veo is the video model behind Flow, Google's AI filmmaking tool.

"The bigger issue is that the enormous financial stakes around AI are turning timelines into marketing noise, with rumors inflated to the point where every incremental update gets treated like a paradigm shift."

Henry Daubrez

Short

Google DeepMind and Yale University have introduced a new AI model called C2S-Scale 27B, built on the open Gemma model family. The system analyzes single-cell gene expression data and, according to Google DeepMind, uncovered a previously unknown therapeutic pathway for cancer. The model identified the drug silmitasertib (CX-4945) as a "conditional enhancer," meaning it can make tumor cells more visible to the immune system under specific conditions. In the team's words:

"This result also provides a blueprint for a new kind of biological discovery. It demonstrates that by following the scaling laws and building larger models like C2S-Scale 27B, we can create predictive models of cellular behavior that are powerful enough to run high-throughput virtual screens, discover context-conditioned biology, and generate biologically-grounded hypotheses."

Google DeepMind

C2S-Scale first screened more than 4,000 drugs across two simulated immune environments; lab experiments with human neuroendocrine cell models then confirmed the prediction. The code is available on GitHub, the model can be found on Hugging Face, and more details are in the preprint on bioRxiv.
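Because C2S-Scale 27B is a Gemma-based language model published on Hugging Face, it can in principle be loaded with the standard transformers API. The sketch below is only illustrative: the repository ID and the cell-sentence prompt format are assumptions, not details confirmed above.

```python
# Minimal sketch: loading a Gemma-based checkpoint from Hugging Face.
# The repository ID and prompt format are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example-org/C2S-Scale-27B"  # hypothetical repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Cell2Sentence-style input: a cell described as a ranked list of gene names.
prompt = "Cell sentence: CD3D CD3E TRAC IL7R LTB. Predict the cell type:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```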

Short

Apple's new M5 chip is aimed at speeding up AI features on the MacBook Pro, iPad Pro, and Apple Vision Pro. Apple says the M5 delivers more than four times the GPU performance for AI tasks compared to the previous M4 chip. Each of the ten GPU cores includes a built-in "Neural Accelerator." The 16-core Neural Engine can process up to 38 trillion operations per second, making it 60 percent faster than the Neural Engine in the M4. The CPU is also up to 15 percent quicker at multi-threaded processing.

The chip uses a third-generation 3 nm manufacturing process, offers 153 GB/s of shared memory bandwidth (about 30 percent more than before), and supports up to 32 GB of unified memory. This allows larger AI models to run directly on the device. Apple expects that existing AI apps and tools like Apple Intelligence will see faster performance. Developers can access the AI hardware using Core ML, Metal 4, and Tensor APIs.
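Of the developer paths Apple names, Core ML is the one most workflows start from. Here is a minimal sketch of that route, using coremltools to convert a small PyTorch model so Core ML can schedule it on the CPU, GPU, or Neural Engine; the model choice and settings are illustrative assumptions, not Apple sample code.

```python
# Minimal sketch: converting a PyTorch model to Core ML with coremltools.
# Core ML then decides at runtime which on-device hardware to use.
import torch
import torchvision
import coremltools as ct

# Any small example network works; swap in your own model.
model = torchvision.models.mobilenet_v3_small(weights=None).eval()
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    compute_units=ct.ComputeUnit.ALL,  # allow CPU, GPU, and Neural Engine
)
mlmodel.save("MobileNetV3Small.mlpackage")
```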
