Short

Mistral AI has released an updated version of its Small 3 model, now available as Small 3.1 under the Apache 2.0 license. The new version brings improved text processing, multimodal understanding, and an expanded context window of up to 128,000 tokens. According to Mistral AI, Small 3.1 outperforms comparable models such as Google's Gemma 3 and GPT-4o mini, reaching inference speeds of 150 tokens per second. The model runs on consumer hardware such as a single RTX 4090 graphics card or a Mac with 32 GB of RAM. Mistral Small 3.1 is available for download on Hugging Face in base and instruct versions, and can also be used through the Mistral AI API or on Google Cloud Vertex AI. The company plans to bring the model to NVIDIA NIM and Microsoft Azure AI Foundry in the coming weeks.

Image: Mistral AI

Google has added native video understanding to its Gemini models, letting users analyze YouTube content through Google AI Studio. Simply paste a YouTube link into the prompt; the system then transcribes the audio and analyzes video frames at one-second intervals. Users can, for example, reference specific timestamps and extract summaries, translations, or visual descriptions. Currently in preview, the feature allows processing up to eight hours of video per day, limited to one public video per request. Gemini Pro handles videos up to two hours of footage, while Gemini Flash handles up to one hour. The update follows the recent addition of native image generation in Gemini.

Video: via Logan Kilpatrick
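For developers, the same capability is exposed outside AI Studio via the Gemini API, where a public YouTube link can be passed as a media part alongside a text instruction. The sketch below builds such a `generateContent` request body; the exact field names follow the publicly documented REST format, but the placeholder URL, model choice, and helper function are assumptions to check against the current API reference.

```python
import json

def build_video_request(youtube_url: str, prompt: str) -> dict:
    """Sketch: build a generateContent-style request body pairing a
    public YouTube link with a text instruction (e.g. a timestamped
    query). Field names mirror the documented REST payload format."""
    return {
        "contents": [{
            "parts": [
                # The video is referenced by URI; per the announcement,
                # Gemini transcribes the audio and samples frames at
                # one-second intervals on its side.
                {"file_data": {"file_uri": youtube_url}},
                {"text": prompt},
            ]
        }]
    }

# Hypothetical usage with a placeholder video ID:
body = build_video_request(
    "https://www.youtube.com/watch?v=VIDEO_ID",
    "Summarize the segment starting at 01:30.",
)
print(json.dumps(body, indent=2))
```

POSTing a body like this to the `generateContent` endpoint with an API key would return the model's analysis; note the preview limits above (one public video per request, up to eight hours per day) still apply.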
