Short

Meta has rolled out "Vibes," a new short-form video feed in the Meta AI app featuring AI-generated clips. Users can create original videos, remix existing ones, or simply scroll through the feed. For now, Meta is partnering with Midjourney and Black Forest Labs, with its own video models still in development.

Meta's "Vibes" feed lets users generate, remix, and browse short AI-created videos. | Video: Meta

Meta is also exploring outside AI tools for other products. The company is reportedly in talks with Google to use fine-tuned Gemini models to improve ad targeting. The move highlights the current limits of Meta's own AI ambitions.

Short

Lionsgate's AI deal with Runway is moving slower than planned. The studio wanted to use its film library to train an AI model for movie production, but TheWrap reports the library is simply too small for the job; even Disney's catalog would fall short, according to an insider.

Image: via TheWrap

This challenges a major industry belief: even after decades of making movies, big Hollywood studios don't have enough diverse, large-scale, fully licensed footage to train a top AI video model on their own. Legal roadblocks like actor image rights are also slowing things down. Lionsgate says it's still pursuing AI projects with other partners. Runway had no comment.

The situation also raises questions about where companies like Runway, Google, or OpenAI get the huge video datasets needed for their leading models. So far, they're not saying.

Short

Alibaba has released Qwen3-VL, an open-source vision-language model that works with both images and text. The top version, Qwen3-VL-235B-A22B, is available in two variants: "Instruct," which Alibaba reports outperforms Google's Gemini 2.5 Pro on major vision benchmarks, and "Thinking," which scores highly on multimodal reasoning tasks. Detailed benchmark results are available in Alibaba's technical blog.

Qwen3-VL can interact with graphical interfaces, generate code from screenshots, analyze videos up to two hours long, and recognize text in 32 languages, even when image quality is low. The model supports 2D and 3D spatial understanding and is designed to handle math and science tasks.

Video: Qwen3-VL demo shows agentic image processing.

Qwen3-VL is available on Hugging Face, ModelScope, and Alibaba Cloud. Public chat access is available at chat.qwen.ai.

Short

Microsoft is developing a pilot project called the "Publisher Content Marketplace" (PCM), Axios reports. The platform would let publishers sell their content to AI products like Microsoft Copilot. Microsoft presented the idea to a group of US publishers at a partner meeting in Monaco last week, where one slide read, "You deserve to be paid on the quality of your IP." There is no confirmed launch date for the PCM pilot yet, but over time the marketplace could open up to more partners and other AI buyers.

Up to now, most large AI companies have paid flat license fees instead of royalties based on usage. Microsoft already partners with outlets like Reuters and Axel Springer for Copilot. OpenAI and other AI companies also pay license fees to a select group of publishers—a practice journalism researcher Jeff Jarvis has called "pure lobbying."

Short

OpenAI is hiring a manager to lead advertising efforts for ChatGPT. According to Alex Heath, Fidji Simo, who has served as CEO of Applications at OpenAI since August, is meeting with candidates, including some from her former Facebook network. The new position will oversee all monetization strategies for ChatGPT, including ad products and subscription models, and will report directly to Simo. Simo now manages all areas of OpenAI except research, infrastructure, hardware, and security, which remain under Sam Altman's leadership.

ChatGPT ads have been expected for some time. Earlier leaks revealed that OpenAI plans to generate tens of billions in revenue from free users by 2030.
