China greenlights 400,000 Nvidia H200 chip imports for tech giants, according to Reuters

China has authorized ByteDance, Alibaba, and Tencent to purchase Nvidia's H200 AI chips, Reuters reports, citing four people familiar with the matter. The three tech giants can import more than 400,000 H200 chips combined. Additional companies are on a waiting list for future approvals.

The approvals came during Nvidia CEO Jensen Huang's visit to China. Huang arrived in Shanghai last Friday and has since traveled to Beijing and other cities. The Chinese government is attaching conditions to the approvals that are still being finalized. A fifth source told Reuters that the licenses are too restrictive and that customers aren't yet converting approvals into orders. Beijing has previously discussed requiring companies to buy a certain quota of domestic chips before they can import foreign semiconductors.

The H200 is Nvidia's second most powerful AI chip, delivering roughly six times the performance of the H20. Chinese companies have ordered more than two million H200 chips, according to Reuters - far more than Nvidia can deliver. Beijing had previously held off on allowing imports to support its domestic chip industry. The U.S. approved exports in early January.

Decart's Lucy 2.0 transforms live video in real time using text prompts

AI startup Decart has unveiled Lucy 2.0, a real-time video transformation model. The system can modify live video at 30 frames per second in 1080p resolution with near-zero latency. Users can swap characters, place products, change clothing, and completely transform environments - all controlled through text commands and reference images while the video is still running.

According to Decart, Lucy 2.0 doesn't rely on depth maps or 3D models. Instead, the system's understanding of physics comes entirely from patterns learned during video training. A new technique called "Smart History Augmentation" prevents image quality from degrading over time, letting the model run stably for hours, the startup says.
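As a rough illustration of how a client might interact with a prompt-controlled real-time system like this, the sketch below streams frames and a text prompt over a WebSocket and reads back transformed frames. The endpoint URL, message schema, and field names are assumptions made for illustration; Decart's actual API is not documented here.

```python
# Illustrative sketch only: a client loop for a prompt-controlled,
# real-time video restyling service. The endpoint, message schema, and
# field names are assumptions, not Decart's published API.
import asyncio, base64, json
import websockets  # pip install websockets

WS_URL = "wss://example.invalid/lucy/realtime"   # placeholder endpoint
PROMPT = "turn the office into a rainforest"

async def stream(frames: list[bytes]) -> list[bytes]:
    """Send JPEG frames at ~30 fps and collect the restyled frames."""
    out = []
    async with websockets.connect(WS_URL) as ws:
        # Send the prompt once; it applies to the whole live stream and
        # could be re-sent mid-stream to change the transformation.
        await ws.send(json.dumps({"type": "prompt", "text": PROMPT}))
        for frame in frames:
            await ws.send(json.dumps({
                "type": "frame",
                "jpeg_base64": base64.b64encode(frame).decode(),
            }))
            reply = json.loads(await ws.recv())
            out.append(base64.b64decode(reply["jpeg_base64"]))
    return out
```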

The technology runs on AWS Trainium3 chips. A demo is available at lucy.decart.ai.

OpenAI's Prism combines LaTeX editor, reference manager, and GPT-5.2 in one tool

OpenAI has launched Prism, a free AI workspace for scientific writing. The tool runs on GPT-5.2 and combines a LaTeX editor, reference manager, and AI assistant in a cloud-based environment. Researchers can create unlimited projects and invite collaborators.

The AI has access to the entire document and can help with writing, editing, and structuring. Users can search and incorporate academic literature from sources like arXiv. Whiteboard sketches or handwritten equations can be converted directly to LaTeX via image upload. Real-time collaboration with co-authors is also supported.
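As an example of the kind of output such a conversion produces, an uploaded whiteboard sketch of Gauss's law might end up as plain LaTeX inside the project; the snippet below is illustrative only, not actual Prism output.

```latex
% Example only: the sort of LaTeX a handwritten-equation upload might be
% converted into inside a Prism project (not actual Prism output).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Gauss's law in differential form:
\begin{equation}
  \nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}
\end{equation}
\end{document}
```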

Prism is based on Crixet, a LaTeX platform that OpenAI acquired. The tool aims to eliminate the need to switch between different programs like editors, PDFs, and reference managers. Prism is available now for anyone with a ChatGPT account at prism.openai.com. Availability for Business and Enterprise plans will follow later.

Moonshot AI releases Kimi K2.5, claims most powerful open-weight model with 100-agent coordination

Moonshot AI has released Kimi K2.5, which the company says is the most powerful open-weight model available. The model can independently coordinate up to 100 AI agents working in parallel on complex tasks.
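Moonshot hasn't detailed how the coordination works. As a generic illustration of the underlying fan-out/fan-in pattern, here is a small asyncio sketch in which a coordinator launches many sub-agents in parallel and collects their results; `call_agent` is a stand-in for whatever model or tool call each sub-agent would make, not Moonshot's implementation.

```python
# Generic fan-out/fan-in pattern for coordinating many parallel agents.
# This is a common asyncio idiom, not Moonshot's published method.
import asyncio

async def call_agent(agent_id: int, subtask: str) -> str:
    await asyncio.sleep(0.1)          # stand-in for a model or tool call
    return f"agent {agent_id}: done with {subtask!r}"

async def coordinate(task: str, n_agents: int = 100) -> list[str]:
    subtasks = [f"{task} (part {i})" for i in range(n_agents)]
    # Launch all sub-agents concurrently and wait for every result.
    return await asyncio.gather(
        *(call_agent(i, s) for i, s in enumerate(subtasks))
    )

if __name__ == "__main__":
    results = asyncio.run(coordinate("summarize a long report"))
    print(len(results), "sub-results collected")
```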

Allen AI's SERA brings open coding agents to private repos for as little as $400 in training costs

AI research institute Allen AI has released SERA, a family of open-source coding agents designed for easy adaptation to private code bases. The top model, SERA-32B, solves up to 54.2 percent of problems in the SWE-Bench-Test Verified coding benchmark (64K context), outperforming comparable open-source models.

SERA outperforms comparable open-source coding agents on the SWE-Bench-Test Verified benchmark with 32K context. | Image: Allen AI

According to Allen AI, training takes just 40 GPU days and costs between $400 (to match previous open-source results) and $12,000 (for performance on par with leading industry models). That makes training on proprietary code data realistic even for small teams. SERA uses a simplified training method called "Soft-verified Generation" that doesn't require perfectly correct code examples. Technical details are available in Allen AI's blog post.

The models work with Claude Code and can be launched with just two lines of code, according to Allen AI. All models, code, and instructions are available on Hugging Face under the Apache 2.0 license.
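Allen AI's own two-line launcher isn't reproduced here, but since the weights are published on Hugging Face, a standard transformers loading snippet is a plausible starting point. The repo id below is a guess based on the announcement, not a verified model path.

```python
# Minimal sketch of pulling an open-weight coding model from Hugging Face
# with the transformers library. "allenai/SERA-32B" is a hypothetical repo
# id; check Allen AI's Hugging Face page for the actual name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/SERA-32B"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "Fix the off-by-one error in the following function:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```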

Mistral AI launches terminal-based coding agent Vibe 2.0

Mistral AI has unveiled Mistral Vibe 2.0, an upgrade to its terminal-based coding agent powered by the Devstral 2 model. The tool lets developers drive coding tasks in natural language, work across multiple files at once, and draw on full codebase context.

Version 2.0 adds custom subagents for specific tasks like testing or code review, clarifying questions when instructions are ambiguous rather than silent automatic decisions, and slash commands for preconfigured workflows.

Mistral Vibe is available through Le Chat Pro ($14.99/month) and Team plans ($24.99/seat). Devstral 2 moves to paid API access – free usage remains available for testing on the Experiment plan. For enterprises, Mistral additionally offers fine-tuning, reinforcement learning, and code modernization services.
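For the paid API route, a call through Mistral's official Python client would look roughly like the sketch below; the model identifier is a placeholder, since the exact Devstral 2 model name isn't given in the announcement.

```python
# Rough sketch of calling a Devstral-class model through Mistral's Python
# client (pip install mistralai). The model name is a placeholder; check
# Mistral's model list for the actual Devstral 2 identifier.
import os
from mistralai import Mistral

client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

response = client.chat.complete(
    model="devstral-2",  # placeholder identifier
    messages=[{
        "role": "user",
        "content": "Refactor this function to remove the nested loops:\n...",
    }],
)
print(response.choices[0].message.content)
```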