Google introduces Gemini Enterprise, its answer to Microsoft Copilot and ChatGPT Enterprise. The new platform gives companies a central hub to create, manage, and deploy AI agents across existing workflows—no coding required. Employees can chat with Gemini to look up information, analyze data, or automate routine tasks. Out of the box, Google offers its own agents like Deep Research and Code Assist, but companies can also bring in their own or third-party agents.

Gemini Enterprise connects with data from Google Workspace, Microsoft 365, Salesforce, SAP, and BigQuery. There are two pricing tiers: "Gemini Business," starting at $21 per user per month for smaller teams, and "Gemini Enterprise Standard/Plus," starting at $30 per user per month with extra features for larger organizations.

Google is expanding its AI-powered search mode to more than 35 new languages and over 40 additional countries and regions, including much of Europe. The global rollout is expected to finish within a week, bringing the feature to more than 200 countries worldwide. In this mode, users tend to ask much longer and more complex questions than in traditional searches, making the experience feel more like ChatGPT than classic Google Search.

While familiar risks like potential hallucinations remain, the new mode promises a noticeably improved search experience with more precise answers and less SEO clutter. But for the broader web ecosystem, this marks a troubling shift: Google is evolving into an “omni-publisher” that keeps users inside its own platform. As links receive fewer clicks, publishers are left struggling with declining traffic and ad revenue.

ElevenLabs has released ElevenLabs UI, an open-source library of 22 components designed for speech and audio applications. According to the company, the toolkit makes it easy to build user interfaces for chatbots, transcription tools, music projects, or voice agents. All components are built on the shadcn/ui framework, fully customizable, and distributed under the MIT license.

Examples include "transcriber-01," a dictation module for web apps, and "voice-chat-03," a chat interface with built-in state management. Additional modules like audio players, conversation bars, and interactive visualizations are also available on the project website.

Developers can freely use, modify, and integrate the source code into their own projects.
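For a sense of how such shadcn-based components are typically wired into a React project, here is a minimal sketch. The component export name and import path are assumptions for illustration only and are not taken from the ElevenLabs UI documentation; shadcn-style components are copied into your own repository (usually via the shadcn CLI) and then imported from local paths.

```tsx
// Hypothetical usage sketch: the component name and import path below are
// assumptions and may differ from the actual ElevenLabs UI files.
import * as React from "react";

// Assumed local path after adding a voice-chat style component to the project.
import { VoiceChat } from "@/components/ui/voice-chat";

export default function SupportAgentPanel() {
  return (
    <main className="mx-auto max-w-xl p-6">
      {/* Per the article, the chat component ships with built-in state
          management, so no extra wiring is shown here. */}
      <VoiceChat />
    </main>
  );
}
```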

Microsoft unveiled several new multimodal AI models for Azure AI Foundry at OpenAI DevDay in October 2025. The update includes GPT-image-1-mini, GPT-realtime-mini, and GPT-audio-mini, along with security improvements for GPT-5-chat-latest and the analysis model GPT-5-pro. The new models are designed to help developers build AI applications for text, image, audio, and video faster and at lower cost.

The Microsoft Agent Framework, an open-source SDK for coordinating multiple AI agents, is now available, as is OpenAI's new Agent SDK.
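As a rough sketch of how one of these Azure AI Foundry deployments might be called from code, the snippet below uses the openai npm package's AzureOpenAI client for a plain chat request. The deployment name, API version, and environment variable names are placeholders assumed for illustration, not values from Microsoft's announcement, and the chat call stands in for whichever modality your deployment supports.

```ts
// Minimal sketch, not an official example. Assumes an Azure AI Foundry /
// Azure OpenAI resource with a chat model already deployed; the deployment
// name, API version, and env var names below are placeholders.
import { AzureOpenAI } from "openai";

const client = new AzureOpenAI({
  endpoint: process.env.AZURE_OPENAI_ENDPOINT!, // e.g. https://<resource>.openai.azure.com
  apiKey: process.env.AZURE_OPENAI_API_KEY!,
  apiVersion: "2024-10-21",                     // assumed; use a version your resource supports
  deployment: "gpt-5-chat-latest",              // assumed deployment name
});

async function main() {
  // A plain chat completion against the deployed model.
  const response = await client.chat.completions.create({
    model: "gpt-5-chat-latest",                 // on Azure this resolves to the deployment above
    messages: [
      { role: "user", content: "Summarize this release in one sentence." },
    ],
  });
  console.log(response.choices[0]?.message?.content);
}

main().catch(console.error);
```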

OpenAI is adding new controls to its Sora video app. According to Sora head Bill Peebles, users can now decide where AI-generated versions of themselves can appear, for example by blocking political content or banning certain words. Users can also set style guidelines for their digital likeness. These updates come in response to criticism over abusive deepfakes on the platform. Peebles also announced that Sora will soon officially support cameos featuring copyrighted characters. Recently, CEO Sam Altman said rights holders should have "more control" and will soon receive a share of Sora's revenue.
