Nvidia, Amazon, and Microsoft could invest up to $60 billion in OpenAI

OpenAI's latest funding round might hit peak circularity. According to The Information, the AI company is in talks with Nvidia, Microsoft, and Amazon about investments totaling up to $60 billion. Nvidia could put in as much as $30 billion, Amazon more than $10 billion—possibly even north of $20 billion—and Microsoft less than $10 billion. On top of that, existing investor SoftBank could contribute up to $30 billion. If these deals go through, the funding round could reach the previously rumored $100 billion mark at a valuation of around $730 billion.

Critics will likely point out how circular these deals really are. Several potential investors, including Microsoft and Amazon, also sell servers and cloud services to OpenAI. That means a chunk of the investment money flows right back to the investors themselves. These arrangements keep the AI hype machine running without the actual financial benefits of generative AI showing up in what end users pay.

Cursor slashes codebase indexing from four hours to 21 seconds

AI coding assistant Cursor now indexes large codebases in 21 seconds instead of over four hours. The trick: instead of building an index from scratch for each new user, Cursor reuses existing indices from team members. According to the company's blog post, copies of the same codebase within a team are 92 percent identical on average, making this approach highly efficient.

Merkle trees compare file hashes between client and server, synchronize only the files that differ, and delete missing entries.

A Cursor study found that the semantic search enabled by these indices improves AI response accuracy by 12.5 percent. The technology relies on Merkle trees - a data structure using cryptographic hashes - to ensure users only see code they're authorized to access. For typical projects, wait times for the first search query drop from nearly 8 seconds to just 525 milliseconds. The startup behind Cursor shipped version 2.0 with its own coding model in October 2025 and now generates around $500 million in annual revenue.
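The hash-comparison idea can be sketched in a few lines of Python. This is an illustrative simplification, not Cursor's actual implementation: a real Merkle tree hashes directories hierarchically so identical subtrees can be skipped wholesale, while this sketch only short-circuits on a single root hash over the file list.

```python
import hashlib

def file_hash(content: bytes) -> str:
    # Leaf hash: one SHA-256 digest per file's contents.
    return hashlib.sha256(content).hexdigest()

def root_hash(hashes: dict[str, str]) -> str:
    # Root hash over the sorted leaves; if two roots match,
    # the whole sync can be skipped.
    joined = "".join(f"{p}:{h}" for p, h in sorted(hashes.items()))
    return hashlib.sha256(joined.encode()).hexdigest()

def diff(client_files: dict[str, bytes], server_hashes: dict[str, str]):
    # Returns (paths to upload because they are new or differ,
    # paths to delete because the client no longer has them).
    client_hashes = {p: file_hash(c) for p, c in client_files.items()}
    if root_hash(client_hashes) == root_hash(server_hashes):
        return [], []  # indices already identical
    to_sync = [p for p, h in client_hashes.items() if server_hashes.get(p) != h]
    to_delete = [p for p in server_hashes if p not in client_files]
    return to_sync, to_delete
```

Since team copies of a codebase are 92 percent identical on average, most leaf hashes already match, and only the small differing remainder needs to be re-indexed.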

Google appears to be preparing voice cloning for Gemini 3 Flash

Google is working on a feature that lets users clone their own voice in AI Studio. According to TestingCatalog, a hidden option called "Create Your Voice" shows up when selecting the "Flash Native Audio Preview" model, which is currently tied to Gemini 2.5 Flash. Clicking it opens a window for recording and uploading audio, but the feature isn't functional yet. The discovery suggests Google is getting ready to ship native audio capabilities with Gemini 3 Flash. This would let developers create artificial voices based on recorded voice samples. Google released an update for Gemini 2.5 Flash Native Audio back in December 2025 that improved voice quality and made the model follow instructions more precisely.

Screenshot of Google AI Studio in Playground mode. In the sidebar on the right, below the voice selection "Zephyr", a button labeled "Create your voice" is visible, marked with a red arrow. The model name Gemini 2.5 Flash Native Audio Preview appears at the top right.
The hidden "Create your voice" option in Google AI Studio hints at upcoming voice cloning functions.

TestingCatalog also found a new option for importing entire codebases via GitHub repositories. The start page is apparently being revised as well and will display activities and usage statistics separately in the future.

China greenlights 400,000 Nvidia H200 chip imports for tech giants, according to Reuters

China has authorized ByteDance, Alibaba, and Tencent to purchase Nvidia's H200 AI chips, Reuters reports, citing four people familiar with the matter. The three tech giants can import more than 400,000 H200 chips combined. Additional companies are on a waiting list for future approvals.

The approval came during Nvidia CEO Jensen Huang's visit to China. Huang arrived in Shanghai last Friday and has since traveled to Beijing and other cities. The Chinese government is attaching conditions to the approvals that are still being finalized. A fifth source told Reuters the licenses are too restrictive, and customers aren't converting approvals into orders yet. Beijing has previously discussed requiring companies to buy a certain quota of domestic chips before they can import foreign semiconductors.

The H200 is Nvidia's second most powerful AI chip, delivering roughly six times the performance of the H20. Chinese companies have ordered more than two million H200 chips, according to Reuters - far more than Nvidia can deliver. Beijing had previously held off on allowing imports to support its domestic chip industry. The U.S. approved exports in early January.

Decart's Lucy 2.0 transforms live video in real time using text prompts

AI startup Decart has unveiled Lucy 2.0, a real-time video transformation model. The system can modify live video at 30 frames per second in 1080p resolution with near-zero latency. Users can swap characters, place products, change clothing, and completely transform environments - all controlled through text commands and reference images while the video is still running.

According to Decart, Lucy 2.0 doesn't rely on depth maps or 3D models. Instead, the system's understanding of physics comes entirely from patterns learned during video training. A new technique called "Smart History Augmentation" prevents image quality from degrading over time, letting the model run stably for hours, the startup says.

The technology runs on AWS Trainium3 chips. A demo is available at lucy.decart.ai.

OpenAI's Prism combines LaTeX editor, reference manager, and GPT-5.2 in one tool

OpenAI has launched Prism, a free AI workspace for scientific writing. The tool runs on GPT-5.2 and combines a LaTeX editor, reference manager, and AI assistant in a cloud-based environment. Researchers can create unlimited projects and invite collaborators.

The AI has access to the entire document and can help with writing, editing, and structuring. Users can search and incorporate academic literature from sources like arXiv. Whiteboard sketches or handwritten equations can be converted directly to LaTeX via image upload. Real-time collaboration with co-authors is also supported.

Prism is based on Crixet, a LaTeX platform that OpenAI acquired. The tool aims to eliminate the need to switch between different programs like editors, PDFs, and reference managers. Prism is available now for anyone with a ChatGPT account at prism.openai.com. Availability for Business and Enterprise plans will follow later.

Allen AI's SERA brings open coding agents to private repos for as little as $400 in training costs

AI research institute Allen AI has released SERA, a family of open-source coding agents designed for easy adaptation to private codebases. The top model, SERA-32B, solves up to 54.2 percent of problems in the SWE-bench Verified coding benchmark (64K context), outperforming comparable open-source models.

SERA outperforms comparable open-source coding agents on the SWE-bench Verified benchmark with 32K context. | Image: Allen AI

According to AI2, training takes just 40 GPU days and costs between $400 (to match previous open-source results) and $12,000 (for performance on par with leading industry models). This makes training on proprietary code data realistic even for small teams. SERA uses a simplified training method called "Soft-verified Generation" that doesn't require perfectly correct code examples. Technical details are available in Allen AI's blog post.

The models work with Claude Code and can be launched with just two lines of code, according to Allen AI. All models, code, and instructions are available on Hugging Face under the Apache 2.0 license.

UK government taps Anthropic AI to help citizens find jobs

The British government has chosen Anthropic to develop an AI assistant for the GOV.UK website. The Department for Science, Innovation and Technology (DSIT) plans to use the system to help citizens navigate government services and receive personalized guidance. The initial focus will be on jobseekers - helping them with career advice, connecting them to training opportunities, and explaining available programs.

The partnership builds on a declaration of intent signed in February 2025. Anthropic engineers are collaborating directly with UK officials to ensure the government can eventually run the system on its own. Users will keep full control over their data and can opt out at any time.

Anthropic's regional head Pip White said the collaboration demonstrates how AI can be deployed safely for the public good. The company isn't the only US tech firm making moves in the UK - Microsoft, OpenAI, and Nvidia committed over 31 billion pounds to British AI infrastructure last year.

There's one notable difference between Anthropic and some of its competitors: while OpenAI holds a $200 million contract with the US Department of Defense, Anthropic prohibits US law enforcement agencies from using its models for domestic surveillance.

OpenAI reportedly launches ChatGPT ads at premium TV prices

OpenAI is charging around $60 per 1,000 impressions for its initial ChatGPT ads, far above typical online advertising rates in the low single digits and closer to what advertisers pay for premium TV spots like NFL games, according to The Information. The ads show up below ChatGPT responses in the free and lower-cost "Go" tiers.

OpenAI is also reportedly charging per impression rather than per click. Advertisers typically prefer click-based billing because it's easier to measure results. The decision to go with impressions likely reflects how AI chatbot users behave differently than traditional search users: they click on external links far less often. Perplexity uses the same approach, also charging per 1,000 impressions.
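The pricing gap becomes concrete with a quick back-of-the-envelope conversion from CPM (cost per 1,000 impressions) to an implied cost per click. The click-through rates below are illustrative assumptions, not reported figures:

```python
def effective_cpc(cpm: float, ctr: float) -> float:
    """Convert a cost-per-1,000-impressions price into the
    implied cost per click, given a click-through rate."""
    clicks_per_1000_impressions = 1000 * ctr
    return cpm / clicks_per_1000_impressions

# At $60 CPM, an assumed 2% CTR implies $3 per click,
# while a chatbot-typical 0.5% CTR implies $12 per click.
print(effective_cpc(60.0, 0.02))   # 3.0
print(effective_cpc(60.0, 0.005))  # 12.0
```

The lower the click-through rate, the more an impression-priced ad costs per measurable result, which is why click-averse chatbot audiences make impression billing the safer bet for the seller.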

The move toward advertising—at premium prices and in a format that's less appealing to advertisers—suggests OpenAI needs to ramp up revenue quickly to justify its high valuation to investors. Sam Altman previously called ChatGPT advertising a last resort and a potential dystopia.

Microsoft's Maia 200 AI chip claims performance lead over Amazon and Google

Microsoft has unveiled its new AI inference chip, Maia 200. Built specifically for inference workloads, the chip delivers 30 percent better performance per dollar than current-generation chips in Microsoft's data centers, the company claims. It's manufactured using TSMC's 3-nanometer process, packs over 140 billion transistors, and features 216 GB of high-speed memory.

According to Microsoft, the Maia 200 is now the most powerful in-house chip among major cloud providers. The company claims it delivers three times the FP4 performance of Amazon's Trainium 3 while also outperforming Google's TPU v7 in FP8 calculations—though independent benchmarks have yet to verify these figures.

Microsoft's comparison shows the Maia 200 outperforming Amazon's Trainium 3 and Google's TPU v7 across key specifications. | Image: Microsoft

Microsoft says the chip already powers OpenAI's GPT 5.2 models and Microsoft 365 Copilot. Developers interested in trying it out can sign up for a preview of the Maia SDK. The Maia 200 is currently available in Microsoft's Iowa data center, with Arizona coming next. Microsoft has published more technical details about the chip separately.