Perplexity signs $750 million deal with Microsoft

AI search startup Perplexity has inked a $750 million contract with Microsoft to use its Azure cloud service. Bloomberg reports, citing people familiar with the matter, that the three-year deal gives Perplexity access to various AI models through Microsoft's Foundry program, including systems from OpenAI, Anthropic, and xAI.

A Microsoft spokesperson confirmed to Reuters that Perplexity has chosen Microsoft Foundry as its primary platform for AI models, while a Perplexity spokesperson told Bloomberg that the partnership provides access to leading models from xAI, OpenAI, and Anthropic.

Amazon Web Services remains the startup's main cloud provider, though a lawsuit last year may have strained that relationship: Amazon, AWS's parent company, sued Perplexity over a shopping feature that automatically places orders for users.

Anthropic's Cowork gets plugins that turn Claude into a specialized assistant for knowledge workers

Anthropic has launched plugins for Cowork that turn Claude into a specialized assistant for sales, legal, finance, and other departments. Each plugin bundles skills, data connections, commands, and sub-agents. A sales plugin, for instance, hooks Claude into the company's CRM and knowledge base while adding commands for customer research and call follow-up.

The Cowork interface showing the plugin menu. | Image: Anthropic

Anthropic has open-sourced eleven plugins covering productivity, data analysis, marketing, and customer service. All components are stored as simple files, which the company says makes them easy to build and share via the Cowork interface or GitHub.
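Anthropic hasn't published its exact file layout in this excerpt, but a plugin that bundles skills, commands, sub-agents, and connectors as simple files could plausibly be organized like the following purely hypothetical sketch (all names and the manifest schema are illustrative, not Anthropic's actual format):

```
sales-plugin/
├── manifest.json          # plugin name, description, version (hypothetical schema)
├── skills/
│   └── customer-research.md
├── commands/
│   └── call-follow-up.md
├── agents/
│   └── crm-lookup.md      # a sub-agent definition
└── connectors/
    └── crm.json           # connection settings for the team's CRM
```

A layout like this would explain why the company calls plugins easy to build and share: a folder of Markdown and JSON files can be versioned on GitHub like any other repository.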

The plugin admin panel lets users organize skills, commands, agents, and connectors for different departments like sales or marketing. | Image: Anthropic

Plugin support is available as a research preview for paying Claude users. Plugins are stored locally for now, with company-wide management coming later. Cowork is Anthropic's desktop software for agentic knowledge work, though it still has fundamental cybersecurity issues.

OpenAI develops six-layer context system to help employees navigate 600 petabytes of data

OpenAI has developed an internal AI data agent that lets employees run complex data analyses using natural language. A key technique called “Codex Enrichment” crawls the codebase to understand what tables actually contain.
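The article doesn't describe how Codex Enrichment is implemented, but the core idea of crawling a codebase to learn what tables contain can be sketched in a few lines. Everything below is an illustrative assumption: the regex, the file selection, and the idea of feeding the collected snippets to a model afterwards.

```python
import re
from pathlib import Path
from collections import defaultdict

# Hypothetical sketch of codebase enrichment: find references to
# warehouse tables in source files and collect surrounding code, so a
# language model can later infer what each table actually contains.
TABLE_REF = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)", re.IGNORECASE)

def crawl_table_usage(repo_root: str) -> dict[str, list[str]]:
    """Map each table name to code snippets that reference it."""
    usage: dict[str, list[str]] = defaultdict(list)
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for match in TABLE_REF.finditer(text):
            # Keep a window of context around each reference; these
            # snippets become the input for a model-written summary.
            start = max(0, match.start() - 200)
            usage[match.group(1)].append(text[start : match.end() + 200])
    return usage

if __name__ == "__main__":
    for table, snippets in sorted(crawl_table_usage(".").items()):
        print(f"{table}: {len(snippets)} reference(s)")
```

At 600 petabytes, the real system would need far more than a regex pass, but the principle is the same: the code that queries a table is often the best documentation of what the table means.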

ChatGPT Agent reportedly lost 75% of its users because nobody knew what it was actually for

OpenAI may shelve ChatGPT Agent just months after launch. Its user base reportedly dropped from four million to under one million as the product struggled with technical issues and an unclear purpose: many people didn't know what to use it for, or that it existed at all. The branding didn't help either, since it implied that only this mode was agentic when ChatGPT already had agent capabilities.

Google DeepMind opens Project Genie to US Gemini subscribers for real-time AI world generation

Google DeepMind has made Project Genie publicly available. The experimental prototype, based on the Genie 3 world model shown in August, is now accessible to Google AI Ultra subscribers in the US who are 18 or older.

The web app lets users create interactive worlds using text or images and explore them in real time. The system generates the environment as you move through it. Project Genie offers three main features: World Sketching for creating worlds with Nano Banana Pro and Gemini, World Exploration for moving through them, and World Remixing for changing existing worlds.

Google says the prototype still has issues: worlds don't always look realistic, characters sometimes respond slowly, and sessions are limited to 60 seconds. Some features announced in August, like promptable events, are still missing. Google plans to expand to other countries later.

The long-term goal of such world models is to serve as training environments for AI agents, allowing them to learn from simulated experiences instead of relying solely on pre-collected data.

OpenAI clarifies it won't claim ownership of user discoveries following confusion over monetization plans

OpenAI researcher Kevin Weil pushes back on reports that the company plans to claim a share of discoveries made by individual users, entrepreneurs, or scientists. The clarification follows a blog post by CFO Sarah Friar outlining plans for IP licensing agreements and outcome-based pricing that would let OpenAI share in the value its tools help create.

Licensing, IP-based agreements, and outcome-based pricing will share in the value created.

Sarah Friar, via OpenAI

Weil clarified on X that Friar was referring to interest OpenAI has heard from large organizations in licensing or IP-based partnerships. The company is open to exploring creative ways to partner and align incentives, but "that's not something we're doing today." If it happens in the future, it would be a bespoke agreement with a company, "not something that would impact individual users," Weil says.

Nearly half of Microsoft's commercial contract backlog is tied to OpenAI

Microsoft posts record cloud revenue, but the stock is down double digits. Investors question whether the billions in AI spending will pay off, especially with nearly half of the commercial contract backlog tied to a single customer: OpenAI.

Nvidia, Amazon, and Microsoft could invest up to $60 billion in OpenAI

OpenAI's latest funding round might hit peak circularity. According to The Information, the AI company is in talks with Nvidia, Microsoft, and Amazon about investments totaling up to $60 billion. Nvidia could put in as much as $30 billion, Amazon more than $10 billion—possibly even north of $20 billion—and Microsoft less than $10 billion. On top of that, existing investor SoftBank could contribute up to $30 billion. If these deals go through, the funding round could reach the previously rumored $100 billion mark at a valuation of around $730 billion.

Critics will likely point out how circular these deals really are. Several potential investors, including Microsoft and Amazon, also sell servers and cloud services to OpenAI. That means a chunk of the investment money flows right back to the investors themselves. These arrangements keep the AI hype machine running without the actual financial benefits of generative AI showing up in what end users pay.

Cursor slashes codebase indexing from four hours to 21 seconds

AI coding assistant Cursor now indexes large codebases in 21 seconds instead of over four hours. The trick: instead of building an index from scratch for each new user, Cursor reuses existing indices from team members. According to the company's blog post, copies of the same codebase within a team are 92 percent identical on average, making this approach highly efficient.

Merkle trees compare file hashes between client and server, synchronize only the entries that differ, and delete missing files.

A Cursor study found that the semantic search enabled by these indices improves AI response accuracy by 12.5 percent. The technology relies on Merkle trees, a data structure built from cryptographic hashes, to ensure users only see code they're authorized to access. For typical projects, the wait time for the first search query drops from nearly 8 seconds to just 525 milliseconds. The startup behind Cursor shipped version 2.0 with its own coding model in October 2025 and now generates around $500 million in annual revenue.
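Cursor's implementation isn't public beyond the blog post, but the Merkle-tree comparison in the caption above can be sketched briefly: hash every file, derive directory hashes from child hashes, and walk two trees top-down so identical subtrees are skipped entirely. All function names below are illustrative.

```python
import hashlib
from pathlib import Path

def merkle(root: Path) -> dict:
    """Build a nested tree: files map to content hashes, directories to subtrees."""
    tree = {}
    for entry in sorted(root.iterdir()):
        if entry.is_dir():
            tree[entry.name] = merkle(entry)
        else:
            tree[entry.name] = hashlib.sha256(entry.read_bytes()).hexdigest()
    return tree

def digest(node) -> str:
    """Hash of a node: the file hash itself, or a hash over all children."""
    if isinstance(node, str):
        return node
    joined = "".join(name + digest(child) for name, child in node.items())
    return hashlib.sha256(joined.encode()).hexdigest()

def diff(client, server, path=""):
    """Yield paths that changed or exist on only one side."""
    if digest(client) == digest(server):
        return  # identical subtree: skip it entirely, nothing to sync
    for name in sorted(set(client) | set(server)):
        c, s = client.get(name), server.get(name)
        if c is None or s is None:
            yield f"{path}/{name}"  # missing on one side
        elif isinstance(c, dict) and isinstance(s, dict):
            yield from diff(c, s, f"{path}/{name}")
        elif c != s:
            yield f"{path}/{name}"  # file content differs
```

If two root hashes match, an existing team index can be reused wholesale; with copies that are 92 percent identical on average, most subtrees compare equal and only a small fraction of files ever needs to be re-indexed.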