
Matthias Bastian

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.
Google's Gemini models add native video understanding

Google has integrated native video understanding into its Gemini models, letting users analyze YouTube content through Google AI Studio. Simply paste a YouTube video link into your prompt; the system transcribes the audio and samples the video frames at one-second intervals. You can, for example, reference specific timestamps and extract summaries, translations, or visual descriptions. Currently in preview, the feature allows processing up to eight hours of video per day, limited to one public video per request. Gemini Pro handles videos up to two hours long, while Gemini Flash handles videos up to one hour. The update follows the addition of native image generation in Gemini.
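As a rough sketch of how such a request could look when sent to the Gemini API rather than through the AI Studio UI, the snippet below assembles a generateContent-style request body that pairs a YouTube link with a timestamped question. The field names (`file_data`, `file_uri`) follow the Gemini REST API's convention for referencing media by URI, and the example URL and helper name are illustrative assumptions, not taken from the article:

```python
def build_video_prompt(youtube_url: str, question: str) -> dict:
    # Assemble a generateContent-style request body pairing a public
    # YouTube URL with a text instruction. The payload shape here is
    # an assumption modeled on the Gemini REST API's file_data part.
    return {
        "contents": [{
            "parts": [
                {"file_data": {"file_uri": youtube_url}},
                {"text": question},
            ]
        }]
    }

# Hypothetical usage: ask for a summary plus a timestamp-specific detail.
payload = build_video_prompt(
    "https://www.youtube.com/watch?v=example",
    "Summarize the video and describe what happens at 01:30.",
)
```

A request body like this would then be POSTed to a Gemini model endpoint with an API key; per the preview limits above, only one public video can be referenced per request.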

Video: via Logan Kilpatrick