OpenAI researcher quit over ads because she doesn't trust her former employer to keep its own promises
OpenAI wants to put ads in ChatGPT and former researcher Zoe Hitzig says that’s a dangerous move. She spent two years at the company and doesn’t believe OpenAI can resist the temptation to exploit its users’ most personal conversations.
OpenAI is adding new capabilities to its Responses API that are built specifically for long-running AI agents. The update brings three major features: server-side compression that keeps agent sessions going for hours without blowing past context limits, controlled internet access for OpenAI-hosted containers so they can install libraries and run scripts, and "skills": reusable bundles of instructions, scripts, and files that agents can pull in and execute on demand.
Skills work as a middle layer between system prompts and tools. Instead of stuffing long workflows into every prompt, developers can package them as versioned bundles that only kick in when needed. They ship as ZIP files, support versioning, and work in both hosted and local containers through the API. OpenAI recommends building skills like small command-line programs and pinning specific versions in production.
OpenAI released an update for GPT-5.2 Instant in ChatGPT and the API on February 10, 2026. The company says the update improves response style and quality, with more measured, contextually appropriate tone and clearer answers to advice and how-to questions that place the most important information up front. CEO Sam Altman addressed the scope of the changes: "Not a huge change, but hopefully you find it a little better."
The update targets the "Instant" variant, the model without reasoning steps. In the API, developers can access it via "gpt-5.2-chat-latest". In ChatGPT, users need to switch to "Instant" in the model picker. The model also kicks in automatically when GPT-5's router determines a reasoning model isn't necessary, or when users have run out of credits for heavier models, something that happens especially often on the free tier.
After launching on macOS, Anthropic's AI assistant Cowork is now available for Windows users. The Windows version includes the full feature set from the macOS release: file access, multi-step task execution, plugins, and MCP connectors for integrating external services. Users can also set up global and folder-specific instructions that Claude follows in every session.
Cowork on Windows is currently in Research Preview, an early testing phase. The feature is available to all paying Claude subscribers at claude.com/cowork.
Anyone who installs the system and gives it access to their files—especially sensitive or private data—should be aware of the cybersecurity risks. Generative AI can be exploited through adversarial prompts (prompt injections), among other attack vectors. This is exactly what happened to Cowork shortly after its launch.
Half of xAI's co-founders have now left Elon Musk's AI startup
Jimmy Ba is the latest co-founder to leave xAI, and like the five who left before him, he’s full of praise for the company and predicts massive AI breakthroughs ahead. Yet somehow, half of xAI’s twelve founding members have still walked out the door.
OpenAI has upgraded Deep Research in ChatGPT. The feature now runs on the new GPT-5.2 model, as OpenAI announced on X. A key addition is that users can connect apps to ChatGPT and—potentially very useful—search specific websites. The search progress can also be tracked in real time, interrupted with questions, or supplemented with new sources. Results can now be displayed as full-screen reports.
That said, even web searches don't protect against generative AI errors, and the longer the generated text, the higher the risk of mistakes. In everyday use, targeted search queries with capable reasoning models are often more reliable. Web search significantly reduces hallucination rates overall, but doesn't eliminate them.
Isomorphic Labs, Google DeepMind's AI medicine startup, has unveiled a new system called "Isomorphic Labs Drug Design Engine" (IsoDDE) that it says outperforms AlphaFold 3. According to the company, IsoDDE doubles AlphaFold 3's accuracy when predicting protein-ligand structures that differ significantly from the training data (see left graph below).
IsoDDE outperforms previous methods in structure prediction, binding pocket recognition, and binding strength prediction, according to Isomorphic Labs. | Image: Isomorphic Labs
Beyond structure prediction, IsoDDE can identify previously unknown docking sites on proteins in seconds based solely on their blueprint, with accuracy that Isomorphic Labs says approaches that of lab experiments. Isomorphic Labs also claims the system can estimate how strongly a drug binds to its target at a fraction of the time and cost of traditional methods. These capabilities could uncover new starting points for active compounds and speed up computational screening.
Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too.
Mrinank Sharma
The Oxford-educated researcher says the time has come to move on. His departure echoes a pattern already familiar at OpenAI, which saw its own wave of safety researchers leave over concerns that the company was prioritizing revenue growth over responsible deployment. Anthropic was originally founded by former OpenAI employees who wanted to put AI safety first, making Sharma's exit all the more telling.
The new Gemini-based Google Translate can be hacked with simple words
A simple prompt injection trick can turn Google Translate into a chatbot that answers questions and even generates dangerous content, a direct consequence of Google switching the service to Gemini models in late 2025.