Cerebras closes $1 billion funding round at $23 billion valuation after landing OpenAI deal

AI chip startup Cerebras Systems has closed a financing round of over one billion dollars. The funding values the company at around 23 billion dollars, according to a press release. Tiger Global led the round, with Benchmark, Fidelity, AMD, Coatue, and other investors participating.

Cerebras, based in Sunnyvale, California, builds specialized AI chips focused on fast inference, the stage in which AI models generate responses. The company's approach uses an entire silicon wafer as a single chip, called the "Wafer Scale Engine" (WSE). Its current flagship is the WSE-3.

The recently announced deal with OpenAI, worth over ten billion dollars, likely helped attract investors. The AI lab plans to acquire 750 megawatts of computing capacity for ChatGPT over three years to speed up response times for its reasoning and code models. OpenAI is reportedly unhappy with Nvidia's inference speeds. Sam Altman recently promised "dramatically faster" responses when discussing the Codex code model—a promise likely tied to the Cerebras deal.

Chinese AI video model Kling 3.0 takes another step toward usable creative assets

Chinese company Kling has released video model 3.0. The new model is described as an "all-in-one creative engine" for multimodal creation. Key features include improved consistency for characters and elements, video production with 15-second clips and better control, and customizable multi-shot recording. Audio features now support multiple character references along with additional languages and accents. For image generation, Kling 3.0 offers 4K output, a new continuous shooting mode, and what the company calls "more cinematic visuals."

Ultra subscribers get exclusive early access through the Kling AI website. Official details on a general release, API access, or technical documentation aren't available yet. The Kling team published a paper on the Kling Omni models in December 2025. The YouTube channel "Theoretically Media" got early access and published a detailed first impression video. According to the channel, the model should roll out to other subscription levels within a week.

Anthropic pledges to keep Claude ad-free while OpenAI moves forward with ChatGPT advertising

Anthropic positions itself against advertising while exploring commercial chat transactions. In a blog post, the company says Claude will remain ad-free: no sponsored links, no advertiser-influenced responses. Anthropic argues that, unlike with search engines, users often share personal information in AI chats, and that advertising could push conversations toward transactions rather than helpfulness. OpenAI CEO Sam Altman once voiced similar concerns before his company decided to pursue ads after all.

Expanding access to Claude is central to our public benefit mission, and we want to do it without selling our users’ attention or data to advertisers.

Instead, Anthropic plans to fund operations through enterprise contracts and subscriptions. The company is also exploring e-commerce transactions like bookings or purchases Claude handles for users. Anthropic could earn from these, similar to OpenAI's plans. However, the company says Claude's primary goal should always be providing helpful answers.

Anthropic's statement comes shortly after OpenAI revealed its ChatGPT advertising plans. Anthropic even produced a video series poking fun at ChatGPT ads.

Alibaba's Qwen3-Coder-Next delivers solid coding performance in a compact package

Alibaba has released Qwen3-Coder-Next, a new open-weight AI model for programming agents and local development. Trained on 800,000 verifiable tasks, the model has 80 billion parameters total but only 3 billion active at any time. Despite this small active footprint, Alibaba says it outperforms or matches much larger open-source models on coding benchmarks, scoring above 70 percent on SWE-Bench Verified with the SWE-Agent framework. As always, benchmarks are only a rough indicator of real-world performance.

Chart: Qwen3-Coder-Next competes with much larger models across multiple coding agent benchmarks while using only 3 billion active parameters. | Image: Qwen

The model supports 256,000 tokens of context and works with development environments like Claude Code, Qwen Code, Qoder, Kilo, Trae, and Cline. Local tools like Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support it. Qwen3-Coder-Next is available on Hugging Face and ModelScope under the Apache 2.0 license. More details in the blog post and technical report.
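For anyone who wants to try it locally, here is a minimal sketch of loading the weights with Hugging Face transformers. The repository id "Qwen/Qwen3-Coder-Next" is an assumption for illustration; check the official Hugging Face page for the exact name, license terms, and hardware requirements (an 80-billion-parameter checkpoint needs substantial memory even with only 3 billion parameters active per token).

```python
# Sketch: running a Qwen open-weight coder model via Hugging Face transformers.
# The repo id below is assumed, not confirmed; verify it on Hugging Face first.
# Requires: transformers, torch, accelerate (for device_map="auto").
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen3-Coder-Next"  # assumed repo id, adjust as needed

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Standard chat-template flow for Qwen instruct models.
messages = [{"role": "user",
             "content": "Write a Python function that checks if a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```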

OpenAI hires Anthropic's Dylan Scandinaro to lead AI safety as "extremely powerful models" loom

OpenAI has filled its "Head of Preparedness" position with Dylan Scandinaro, who previously worked on AI safety at competitor Anthropic. CEO Sam Altman announced the hire on X, calling Scandinaro "by far the best candidate" for the role. With OpenAI working on "extremely powerful models," Altman said strong safety measures are essential.

In his own post, Scandinaro acknowledged the technology's major potential benefits but also warned of "risks of extreme and even irrecoverable harm." OpenAI recently disclosed that a new coding model received a "high" risk rating in cybersecurity evaluations.

There’s a lot of work to do, and not much time to do it!

Dylan Scandinaro

Scandinaro's Anthropic background adds an interesting layer. The company was founded by former OpenAI employees concerned about OpenAI's product focus and what they saw as insufficient safety measures, and has since become known as one of the more safety-conscious AI developers. Altman says he plans to work with Scandinaro to implement changes across the company.

A new platform lets AI agents pay humans to do the real-world work they can't

On Rentahuman.ai, AI agents can hire people for real-world tasks, from holding signs to picking up packages. It sounds absurd, but it shows what happens when language models stop just talking and start taking action.

Anthropic partners with leading research institutes to tackle biology's data bottleneck

Anthropic has announced two partnerships with major US research institutions to develop AI agents for biological research. The Allen Institute and the Howard Hughes Medical Institute (HHMI) will serve as founding partners in the initiative. According to Anthropic, "modern biological research generates data at unprecedented scale," but turning it into "validated biological insights remains a fundamental bottleneck." The company says manual processes "can't keep pace with the data being produced."

HHMI will develop specialized AI agents at the Janelia Research Campus that connect experimental knowledge to scientific instruments and analysis pipelines. The Allen Institute is working on multi-agent systems for data integration and experiment design that could "compress months of manual analysis into hours." According to Anthropic, these systems "are designed to amplify scientific intuition rather than replace it, keeping researchers in control of scientific direction while handling computational complexity."
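Anthropic hasn't published implementation details, but the pattern it describes (an agent that decides when to hand work to a lab's existing analysis pipeline while researchers stay in control) maps onto ordinary tool use with the Anthropic Python SDK. Below is a minimal sketch under that assumption; the QC tool, dataset path, and model name are hypothetical placeholders, not part of the announced systems.

```python
# Sketch of a tool-using research agent with the Anthropic Python SDK.
# The tool "run_rnaseq_qc", the dataset path, and the model name are hypothetical;
# the point is the pattern: Claude decides when to call a pipeline, local code runs it.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

tools = [{
    "name": "run_rnaseq_qc",  # hypothetical wrapper around a lab's QC pipeline
    "description": "Run quality-control checks on an RNA-seq dataset and return a summary.",
    "input_schema": {
        "type": "object",
        "properties": {"dataset_path": {"type": "string"}},
        "required": ["dataset_path"],
    },
}]

def run_rnaseq_qc(dataset_path: str) -> str:
    # Placeholder: in practice this would invoke the lab's existing analysis pipeline.
    return f"QC summary for {dataset_path}: 94% mapped reads, no adapter contamination."

response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user",
               "content": "Check the quality of /data/experiment_42 before we analyze it."}],
)

# If the model chose to call the tool, execute it locally and print the result,
# which would normally be sent back to the model as a tool_result message.
for block in response.content:
    if block.type == "tool_use" and block.name == "run_rnaseq_qc":
        print(run_rnaseq_qc(**block.input))
```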

The move extends Anthropic's push into scientific applications. The company recently launched Cowork, a feature designed for office work that gives Claude access to local files. OpenAI is also targeting the research market with Prism, an AI workspace for scientific writing.
