Hugging Face has launched HuggingChat Omni, an AI router that selects the best open source model for each user prompt from more than 100 available models. The system automatically chooses the fastest, cheapest, or most suitable model for each task, using an approach similar to the new GPT-5 router. Supported models include gpt-oss, qwen, deepseek, kimi, and smolLM.

Hugging Face co-founder Clément Delangue says HuggingChat Omni is just the beginning. The platform already offers access to more than two million open models, spanning not only text, but also images, audio, video, biology, chemistry, time series, and more.
The routing system is built on Arch-Router-1.5B from Katanemo, a lightweight 1.5-billion-parameter model that classifies queries by topic and action. According to Katanemo, Arch-Router outperforms other routing models at matching human preferences and is fully open source. Details are available in the research paper on arXiv.
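The basic pattern behind such a router is straightforward: a small classifier labels the incoming prompt, and the label is mapped to a target model. The Python sketch below is purely illustrative and assumes hypothetical route labels, keyword rules, and model mappings; it is not HuggingChat Omni's or Arch-Router's actual code.

```python
# Illustrative prompt-routing sketch (hypothetical; not HuggingChat Omni's implementation).
# A classifier assigns each prompt a topic/action label; the label selects a target model.

ROUTE_TABLE = {
    "code.generate": "qwen",       # hypothetical mapping: coding prompts -> code-strong model
    "math.solve": "deepseek",      # reasoning-heavy prompts -> reasoning model
    "chat.general": "smolLM",      # short casual chat -> small, cheap model
}

def classify_route(prompt: str) -> str:
    """Stand-in for a router model such as Arch-Router-1.5B.

    A real router runs the prompt through the classifier; here trivial
    keyword rules illustrate the idea.
    """
    text = prompt.lower()
    if any(tok in text for tok in ("code", "python", "function", "bug")):
        return "code.generate"
    if any(tok in text for tok in ("solve", "prove", "integral")):
        return "math.solve"
    return "chat.general"

def route(prompt: str) -> str:
    """Return the name of the model that should handle this prompt."""
    label = classify_route(prompt)
    return ROUTE_TABLE.get(label, "gpt-oss")  # fall back to a default model

print(route("Write a Python function that parses CSV files"))  # -> qwen
```

In the production system, the keyword rules would be replaced by the Arch-Router classifier itself, and the selected model name would be passed on to the corresponding inference endpoint.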
OpenAI has paused the use of Martin Luther King Jr.’s likeness in its Sora video tool after users created offensive content, including a clip showing him making monkey noises at a lectern. The King estate called for the ban.
OpenAI says it will revise its rules for deepfakes—called "cameos" in Sora. Although the company cites "strong free speech interests in depicting historical figures," it now says families and authorized groups should control how public figures are portrayed. Authorized representatives or estate owners can request removal from the tool.
OpenAI expects to cut hardware costs by 20 to 30 percent through a joint chip development program with Broadcom, according to a Bloomberg report citing a person familiar with the company’s plans. The custom chips are scheduled to roll out by late 2026 as part of a project reportedly worth several tens of billions of dollars.
Typically, OpenAI budgets around $50 billion to build a 1-gigawatt data center, with roughly $35 billion spent on advanced chips. The new partnership aims to reduce those chip costs significantly.
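As a rough check of what those figures imply (assuming, for illustration, that the 20 to 30 percent reduction applies to the chip share of the budget), the savings per 1-gigawatt site would land in the range calculated below.

```python
# Back-of-the-envelope estimate; assumes the 20-30% cut applies to the chip budget.
data_center_cost = 50e9   # ~$50 billion per 1-gigawatt data center, per the report
chip_cost = 35e9          # ~$35 billion of that goes to advanced chips

low_savings = 0.20 * chip_cost    # $7.0 billion
high_savings = 0.30 * chip_cost   # $10.5 billion

print(f"Implied savings per 1-GW site: ${low_savings / 1e9:.1f}B to ${high_savings / 1e9:.1f}B")
```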
But the plan comes with major risks. Developing custom silicon requires billions in investment, specialized technical expertise, and multiple design cycles. The rapid pace of AI innovation also increases the risk that the chips could become obsolete quickly. "There’s a steep learning curve," said semiconductor analyst Cody Acree from Benchmark.
With version 1.5, Manus is introducing its most capable AI agent system so far. The updated architecture cuts task processing times from 15 minutes to just 4, and internal tests report a 15 percent jump in output quality. Manus claims the agent can now build, test, and refine full web applications—including backend, user management, and database—directly on the platform. It also handles research, image generation, and analysis.
"We didn’t create a 'website builder' feature. We enhanced the core Manus platform to master a new, complex domain," Manus co-founder Tao Zhang says.
Two new versions are launching: Manus-1.5 and Manus-1.5-Lite. The Lite version is open to everyone, while the full version is reserved for subscribers. Microsoft has also integrated Manus into its agent suite for Windows.
OpenAI is rolling out two updates. ChatGPT can now automatically manage its memory, so users no longer need to delete saved information by hand. For Sora, Pro users can now use storyboards on the web, and video generation limits have increased: all users can make videos up to 15 seconds long on the app and web, while Pro users can create videos up to 25 seconds on the web.
Anthropic's Claude now integrates directly with Microsoft 365, letting users pull in content from Outlook, SharePoint, OneDrive, and Teams. This means emails, calendars, and documents can show up right in a Claude chat, making it easier to bring company data into the conversation.
Anthropic is also adding a centralized search that covers company resources like HR documents and team guidelines. These features are available to Claude Team and Enterprise users, but admins have to enable them first.
The rollout follows Microsoft's recent Copilot integration in Windows, showing just how closely the competition for AI in the workplace is tied to the digital office ecosystem.
AI startup Anthropic expects to nearly triple its annual revenue by 2026, Reuters reports, citing two people familiar with the matter. The company projects annualized revenue of $9 billion by the end of 2025 and is targeting more than $20 billion in 2026 in its base scenario, or as much as $26 billion in the most optimistic case.
Anthropic told Reuters that its annualized revenue stood at around $7 billion as of October. Most of the company's growth comes from enterprise clients, who account for roughly 80 percent of its revenue. The startup was recently valued at $183 billion after raising $13 billion in new funding.
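For context, a quick back-of-the-envelope check (these multiples are derived here for illustration, not taken from the Reuters report) shows how the 2026 targets compare with the 2025 projection.

```python
# Implied growth multiples; derived here for illustration, not figures from the report.
revenue_2025 = 9e9         # projected annualized revenue by end of 2025
base_2026 = 20e9           # base-case target for 2026
optimistic_2026 = 26e9     # most optimistic case for 2026

print(f"Base case: {base_2026 / revenue_2025:.1f}x the 2025 level")              # ~2.2x
print(f"Optimistic case: {optimistic_2026 / revenue_2025:.1f}x the 2025 level")  # ~2.9x, nearly triple
```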