
Google's Gemini models add native video understanding

Google has integrated native video understanding into its Gemini models, letting users analyze YouTube content through Google AI Studio. Simply paste a YouTube video link into your prompt; the system transcribes the audio and samples video frames at one-second intervals. You can, for example, reference specific timestamps and extract summaries, translations, or visual descriptions. Currently in preview, the feature allows processing up to 8 hours of video per day, limited to one public video per request. Gemini Pro handles videos up to two hours long, while Gemini Flash handles videos up to one hour. The update follows the addition of native image generation to Gemini.
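Beyond AI Studio, the same capability is exposed through the Gemini API, where a public YouTube URL can be passed as a file part alongside a text instruction. The sketch below shows one plausible way to assemble such a `generateContent`-style request body as a plain dictionary; the video URL is a hypothetical placeholder, and the exact payload shape and model names should be checked against Google's current API documentation.

```python
# Hedged sketch: assembling a generateContent-style request that pairs a
# public YouTube URL with a text prompt (e.g. referencing a timestamp).
# The payload structure mirrors Google's documented REST format but should
# be verified against the current Gemini API docs before use.

def build_video_prompt(youtube_url: str, question: str) -> dict:
    """Return a request body combining a YouTube video part and a text part."""
    return {
        "contents": [{
            "parts": [
                {"file_data": {"file_uri": youtube_url}},  # the video to analyze
                {"text": question},                        # instruction, may cite timestamps
            ]
        }]
    }

# Hypothetical video ID, for illustration only.
body = build_video_prompt(
    "https://www.youtube.com/watch?v=EXAMPLE_ID",
    "Summarize what happens between 01:30 and 02:00.",
)
```

Since the preview allows only one public video per request, the builder takes a single URL; the text part is where timestamp references go, matching the workflow described above.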

Video: via Logan Kilpatrick

