Google's WebMCP moves the web closer to becoming a structured database for AI agents
In the future, AI agents won’t just search the web; they’ll browse it, shop on it, and complete tasks on their own. At least that’s Big AI’s vision, and Google’s WebMCP wants to turn websites into standardized interfaces for these agents. For website operators who depend on human visitors, that could be a serious problem.
xAI has lost half of its founders in recent weeks and months. Elon Musk said on X that some departures were part of a restructuring where "unfortunately we had to part with some people" to "improve speed of execution."
But former employees tell a different story. One ex-employee told The Verge that many people at the company had grown disillusioned with Grok's focus on NSFW content and its lack of safety standards. A second former employee backed that up: "There is zero safety whatsoever in the company." According to the source, Musk deliberately pushed to make the model less restricted, viewing safety measures as censorship. Among other things, Grok had generated sexualized images of children.
You survive by shutting up and doing what Elon wants.
Anthropic recruits ex-Google data center veterans to build its own AI infrastructure empire
Anthropic is discussing building at least 10 gigawatts of data center capacity worth hundreds of billions of dollars, recruiting ex-Google managers and lining up Google as a financial backer to make it happen.
OpenAI has yet another new coding model and this time it's really fast
OpenAI’s new GPT-5.3-Codex-Spark is a smaller coding model that runs on Cerebras chips and pushes over 1,000 tokens per second. It’s the company’s first model built specifically for real-time programming.
OpenAI is dropping several older AI models from ChatGPT on February 13, 2026: GPT-4o, GPT-4.1, GPT-4.1 mini, and o4-mini. The models will stick around in the API for now. The company says the decision comes down to usage: only 0.1 percent of users still pick GPT-4o on any given day.
We know that losing access to GPT‑4o will feel frustrating for some users, and we didn’t make this decision lightly. Retiring models is never easy, but it allows us to focus on improving the models most people use today.
Google DeepMind has upgraded its specialized thinking mode "Gemini 3 Deep Think" and made it available through the Gemini app and as an API via a Vertex AI early access program. The upgrade targets complex tasks in science, research, and engineering.
Google AI Ultra subscribers can access Deep Think through the Gemini app, while developers and researchers can sign up separately for the API program. According to Google DeepMind, the model tops several major benchmarks:
| Benchmark | Deep Think | Claude Opus 4.6 | GPT-5.2 | Gemini 3 Pro Preview |
| --- | --- | --- | --- | --- |
| ARC-AGI-2 (Logical reasoning) | 84.6% | 68.8% | 52.9% | 31.1% |
| Humanity's Last Exam (Academic reasoning) | 48.4% | 40.0% | 34.5% | 37.5% |
| MMMU-Pro (Multimodal reasoning) | 81.5% | 73.9% | 79.5% | 81.0% |
| Codeforces (Coding/algorithms, Elo) | 3,455 | 2,352 | - | 2,512 |
While Deep Think dominates in logic and coding, the gap narrows significantly on MMMU-Pro: it scored 81.5 percent, barely ahead of Gemini 3 Pro Preview at 81.0 percent. This suggests the thinking upgrades focus heavily on abstract reasoning rather than visual processing. Deep Think also achieved gold medal-level results at the 2025 Physics and Chemistry Olympiads, and Google DeepMind has published examples of the model in scientific use.
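For developers who get into the Vertex AI early access program, calling Deep Think should look much like any other Gemini request through the google-genai Python SDK. The sketch below is an assumption of how that would look: the model ID, project, and location values are placeholders, since Google hasn't published the early-access identifier.

```python
# Minimal sketch: calling a Gemini model through Vertex AI with the google-genai SDK.
# The model ID below is a placeholder; the real Deep Think identifier is only
# available to participants in the early access program.
from google import genai

client = genai.Client(
    vertexai=True,               # route requests through Vertex AI
    project="my-gcp-project",    # placeholder project ID
    location="us-central1",      # placeholder region
)

response = client.models.generate_content(
    model="gemini-3-deep-think",  # hypothetical model ID
    contents="Outline an experimental design to measure quantum decoherence in a noisy lab setting.",
)
print(response.text)
```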
OpenAI uses a "special version" of ChatGPT to track down internal information leaks. That's according to a report from The Information, citing a person familiar with the matter. When a news article about internal operations surfaces, OpenAI's security team feeds the text into this custom ChatGPT version, which has access to internal documents as well as employees' Slack and email messages.
The system then suggests possible sources of the leak by identifying files or communication channels that contain the published information and showing who had access to them. It's unclear whether OpenAI has actually caught anyone using this method.
What exactly makes this version special isn't known, but there's a clue: OpenAI engineers recently presented the architecture of an internal AI agent that could serve this purpose. It's designed to let employees run complex data analysis using natural language and has access to institutional knowledge stored in Slack messages, Google Docs, and more.
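The report doesn't say how the tool works under the hood, but the general recipe it describes, finding internal documents that overlap with the published text and then checking who could read them, is easy to outline. The sketch below is a generic illustration of that recipe, not OpenAI's system; the document structure, access metadata, and overlap threshold are all invented for the example.

```python
# Illustrative sketch of leak triage: score internal documents against a leaked
# article by text overlap, then collect the employees with access to the matches.
# This is a generic outline, not a description of OpenAI's internal tool.
from dataclasses import dataclass


@dataclass
class InternalDoc:
    doc_id: str
    text: str
    readers: set[str]  # employees with access (hypothetical access metadata)


def overlap_score(article: str, doc: str) -> float:
    """Crude lexical overlap: shared words divided by words in the article."""
    article_words = set(article.lower().split())
    doc_words = set(doc.lower().split())
    return len(article_words & doc_words) / max(len(article_words), 1)


def candidate_sources(article: str, docs: list[InternalDoc], threshold: float = 0.4) -> dict[str, list[str]]:
    """Map each person to the matching documents they could read."""
    suspects: dict[str, list[str]] = {}
    for doc in docs:
        if overlap_score(article, doc.text) >= threshold:
            for person in doc.readers:
                suspects.setdefault(person, []).append(doc.doc_id)
    return suspects
```

A real system would likely use semantic search over Slack and email rather than raw word overlap, but the access-intersection step is the same basic idea.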
OpenAI researcher quit over ads because she doesn't trust her former employer to keep its own promises
OpenAI wants to put ads in ChatGPT and former researcher Zoe Hitzig says that’s a dangerous move. She spent two years at the company and doesn’t believe OpenAI can resist the temptation to exploit its users’ most personal conversations.
OpenAI is adding new capabilities to its Responses API that are built specifically for long-running AI agents. The update brings three major features: server-side compression that keeps agent sessions going for hours without blowing past context limits, controlled internet access for OpenAI-hosted containers so they can install libraries and run scripts, and "skills": reusable bundles of instructions, scripts, and files that agents can pull in and execute on demand.
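A minimal sketch of how such a session might be set up through the OpenAI Python SDK is below. The `responses.create` call and the `code_interpreter` container tool already exist in the SDK; the model name is a placeholder, and the switches for compression and internet access are left as comments because their exact parameter names come from OpenAI's documentation.

```python
# Sketch of a Responses API call that gives an agent an OpenAI-hosted container.
# The code_interpreter tool and its "container" field exist in the current SDK;
# the new compression and internet-access settings are only noted in comments,
# since their parameter names are defined in OpenAI's docs, not assumed here.
from openai import OpenAI

client = OpenAI()

response = client.responses.create(
    model="gpt-5.2",  # placeholder model name for illustration
    input="Install pandas in your container, load sales.csv, and flag anomalies.",
    tools=[
        {
            "type": "code_interpreter",
            # "auto" lets the API create and manage the hosted container; the new
            # controlled internet access would be configured on this container
            # per OpenAI's documentation.
            "container": {"type": "auto"},
        }
    ],
    # Server-side context compression for hours-long sessions would likewise be
    # enabled via a documented request option rather than guessed at here.
)
print(response.output_text)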
Skills work as a middle layer between system prompts and tools. Instead of stuffing long workflows into every prompt, developers can package them as versioned bundles that only kick in when needed. They ship as ZIP files, support versioning, and work in both hosted and local containers through the API. OpenAI recommends building skills like small command-line programs and pinning specific versions in production.
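Concretely, a skill could be little more than a zipped folder containing a manifest and a script written like a small CLI tool. The layout and manifest fields below are illustrative guesses rather than OpenAI's documented format; the argparse script shows the "small command-line program" style the company recommends.

```python
# Hypothetical skill bundle layout (the actual manifest format is defined in
# OpenAI's docs; this structure is only illustrative):
#
#   report-skill-1.2.0.zip
#   ├── manifest.json      # name, version, entry point (assumed fields)
#   └── make_report.py     # the skill's executable, written CLI-style
#
# make_report.py: a small, self-contained command-line program the agent can run.
import argparse
import json
import sys


def main() -> int:
    parser = argparse.ArgumentParser(description="Summarize a JSON metrics file.")
    parser.add_argument("metrics_file", help="path to a JSON file of numeric metrics")
    parser.add_argument("--top", type=int, default=3, help="how many metrics to report")
    args = parser.parse_args()

    with open(args.metrics_file) as f:
        metrics: dict[str, float] = json.load(f)

    ranked = sorted(metrics.items(), key=lambda kv: kv[1], reverse=True)
    for name, value in ranked[: args.top]:
        print(f"{name}: {value}")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Putting the version in the bundle name mirrors the advice to pin specific versions in production, so an agent never silently picks up a changed workflow.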