Pentagon pushes AI companies to deploy unrestricted models on classified military networks
The Pentagon is pressing leading AI companies including OpenAI, Anthropic, Google, and xAI to make their AI tools available on classified military networks – without the usual usage restrictions.
OpenAI reportedly uses a custom ChatGPT to hunt down internal leaks
According to The Information, citing a person familiar with the matter, OpenAI uses a "special version" of ChatGPT to track down internal information leaks. When a news article about internal operations surfaces, OpenAI's security team feeds the text into this custom ChatGPT version, which has access to internal documents as well as employees' Slack and email messages.
The system then suggests possible sources of the leak by identifying files or communication channels that contain the published information and showing who had access to them. It's unclear whether OpenAI has actually caught anyone using this method.
What exactly makes this version special isn't known, but there's a clue: OpenAI engineers recently presented the architecture of an internal AI agent that could serve this purpose. It's designed to let employees run complex data analysis using natural language and has access to institutional knowledge stored in Slack messages, Google Docs, and more.
OpenAI's Responses API adds context compression, internet access, and "skills" for long-running agents
OpenAI is adding new capabilities to its Responses API that are built specifically for long-running AI agents. The update brings three major features: server-side compression that keeps agent sessions going for hours without blowing past context limits, controlled internet access for OpenAI-hosted containers so they can install libraries and run scripts, and "skills": reusable bundles of instructions, scripts, and files that agents can pull in and execute on demand.
Skills work as a middle layer between system prompts and tools. Instead of stuffing long workflows into every prompt, developers can package them as versioned bundles that only kick in when needed. They ship as ZIP files, support versioning, and work in both hosted and local containers through the API. OpenAI recommends building skills like small command-line programs and pinning specific versions in production.
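The packaging described above can be sketched as a plain ZIP archive. Note that the specific file names and manifest fields here (`manifest.json`, `INSTRUCTIONS.md`, `scripts/run.py`) are illustrative assumptions, not OpenAI's documented skill format:

```python
import io
import json
import zipfile

def build_skill_bundle(name: str, version: str, instructions: str, script: str) -> bytes:
    """Package a hypothetical skill as an in-memory ZIP archive.

    Bundles the instructions, a script, and a small manifest that
    pins the skill's name and version.
    """
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # Manifest pinning a specific version, in the spirit of
        # OpenAI's advice to pin skill versions in production.
        zf.writestr("manifest.json", json.dumps({"name": name, "version": version}))
        zf.writestr("INSTRUCTIONS.md", instructions)
        zf.writestr("scripts/run.py", script)
    return buf.getvalue()

bundle = build_skill_bundle(
    "summarize-reports",
    "1.0.0",
    "Summarize each report in three bullet points.",
    "print('summarizing')",
)
```

The script entry reflects the "small command-line program" framing: the skill's logic lives in a self-contained script the agent can execute, while the instructions tell it when and how.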
Bytedance in talks with Samsung to manufacture its custom AI chip
Bytedance is in talks with Samsung to produce a custom AI chip, a deal that could also give the TikTok parent company access to hard-to-get memory chips, according to Reuters.
Bytedance is developing its own AI chip for inference tasks under the codename SeedChip and is negotiating with Samsung to manufacture it, Reuters reports. What makes the deal especially interesting: the talks also cover access to memory chip supplies, which are extremely scarce amid the global AI infrastructure buildout - making the arrangement particularly valuable for Bytedance.
The company plans to receive its first sample chips by the end of March and produce at least 100,000 units this year, with a possible ramp-up to 350,000. Bytedance intends to spend more than 160 billion yuan (roughly $22 billion) on AI-related procurement in 2026 - more than half of that going toward Nvidia chips, including H200 models, and development of its own chip.
Bytedance executive Zhao Qi acknowledged during an internal meeting in January that the company's AI models still trail global leaders like OpenAI, but pledged continued support for AI development. Bytedance itself denies the chip project - a spokesperson told Reuters the information was inaccurate, but did not elaborate.
OpenAI updates GPT-5.2 Instant in ChatGPT and the API
OpenAI released an update for GPT-5.2 Instant in ChatGPT and the API on February 10, 2026. The company says the update improves response style and quality, with more measured, contextually appropriate tone and clearer answers to advice and how-to questions that place the most important information up front. CEO Sam Altman addressed the scope of the changes: "Not a huge change, but hopefully you find it a little better."
The update targets the "Instant" variant, the model without reasoning steps. In the API, developers can access it via "gpt-5.2-chat-latest". In ChatGPT, users need to switch to "Instant" in the model picker. The model also kicks in automatically when GPT-5's router determines a reasoning model isn't necessary, or when users have run out of credits for heavier models, something that happens especially often on the free tier.
Anthropic's Cowork assistant comes to Windows
After launching on macOS, Anthropic's AI assistant Cowork is now available for Windows users. The Windows version includes the full feature set from the macOS release: file access, multi-step task execution, plugins, and MCP connectors for integrating external services. Users can also set up global and folder-specific instructions that Claude follows in every session.
Cowork on Windows is currently in Research Preview, an early testing phase. The feature is available to all paying Claude subscribers at claude.com/cowork.
Anyone who installs the system and gives it access to their files – especially sensitive or private data – should be aware of the cybersecurity risks. Generative AI can be exploited through adversarial prompts (prompt injections), among other attack vectors. This is exactly what happened to Cowork shortly after its launch.
Half of xAI's co-founders have now left Elon Musk's AI startup
Jimmy Ba is the latest co-founder to leave xAI, and like the five who left before him, he’s full of praise for the company and predicts massive AI breakthroughs ahead. Yet somehow, half of xAI’s twelve founding members have still walked out the door.