Current language model training leaves large parts of the internet on the table
Large language models learn from web data, but which pages actually make it into training sets depends heavily on a seemingly mundane choice: the HTML extractor. Researchers at Apple, Stanford, and the University of Washington found that three common extraction tools pull surprisingly different content from the same web pages.
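The effect is easy to reproduce in miniature. The sketch below is a hypothetical illustration (not any of the tools from the study): two toy extractors built on Python's standard-library `HTMLParser` process the same page, one keeping every text node and one keeping only `<article>` content, and they return visibly different training text.

```python
from html.parser import HTMLParser

# A tiny sample page with typical boilerplate around the main story.
SAMPLE = """
<html><body>
  <nav>Home | About</nav>
  <article><p>Main story text.</p></article>
  <aside><p>Related links.</p></aside>
  <footer>Copyright 2024</footer>
</body></html>
"""

class AllText(HTMLParser):
    """Naive extractor: keep every text node on the page."""
    def __init__(self):
        super().__init__()
        self.chunks = []

    def handle_data(self, data):
        text = data.strip()
        if text:
            self.chunks.append(text)

class ArticleOnly(HTMLParser):
    """Stricter extractor: keep text only inside <article> tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag == "article":
            self.depth += 1

    def handle_endtag(self, tag):
        if tag == "article":
            self.depth -= 1

    def handle_data(self, data):
        text = data.strip()
        if self.depth and text:
            self.chunks.append(text)

def extract(parser_cls, html):
    parser = parser_cls()
    parser.feed(html)
    return " ".join(parser.chunks)

print(extract(AllText, SAMPLE))      # includes nav, aside, and footer boilerplate
print(extract(ArticleOnly, SAMPLE))  # prints "Main story text."
```

Scaled up to billions of pages, this kind of divergence in what each extractor keeps or drops reshapes the resulting training corpus.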
A new integration links Figma's design platform directly with OpenAI's Codex. Teams can automatically generate editable Figma designs from code and convert designs into working code. It runs on the open MCP standard, supports Figma Design, Figma Make, and FigJam, and is set up in the Codex desktop app for macOS.
Until now, moving between Figma and code was mostly a one-way street. Dev Mode offered basic HTML/CSS snippets, plugins exported designs as React or HTML, and Figma Make generated React components from text input. These tools worked in isolation without understanding the full project. The new integration creates an end-to-end connection where the AI accesses code, Figma files, and the design system simultaneously.
Figma was one of the first partners with its own ChatGPT app and uses ChatGPT Enterprise internally. According to OpenAI, over one million people access Codex weekly, with usage up more than 400 percent since the start of the year.
Claude Code now remembers what it learns across sessions - automatically tracking debugging patterns, project context, and preferred working methods without manual input. Previously, users had to log this information themselves or use /init to populate CLAUDE.md files. The new auto-memory function builds on that: Claude creates a MEMORY.md file per project, stores its findings, and pulls them up automatically in later sessions. Work through a tricky debugging problem once, and you won't have to explain the fix again. Users can also explicitly ask Claude to save specific information. The feature is on by default and can be disabled via /memory, the settings file, or an environment variable.
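The pattern itself is simple. The sketch below illustrates the general idea of a per-project memory file under stated assumptions - the file name `MEMORY.md` comes from the announcement, but the helpers `remember` and `recall` are hypothetical and not Claude Code's actual internals.

```python
from pathlib import Path

MEMORY_FILE = "MEMORY.md"  # per-project notes file, as described in the announcement

def remember(project_dir: str, note: str) -> None:
    """Append one learned fact to the project's memory file."""
    path = Path(project_dir) / MEMORY_FILE
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def recall(project_dir: str) -> list[str]:
    """Load previously saved notes, if any, at session start."""
    path = Path(project_dir) / MEMORY_FILE
    if not path.exists():
        return []
    return [line[2:].rstrip()
            for line in path.read_text(encoding="utf-8").splitlines()
            if line.startswith("- ")]
```

In use, each session would call `recall()` on startup and `remember()` whenever a fix or project convention is worth keeping, so the next session starts with that context already loaded.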
Another recent update: locally running sessions can now be continued on the go via smartphone, tablet, or browser at claude.ai/code - without data migrating to the cloud.
Anthropic can't stop humanizing its AI models: now Claude Opus 3 gets a retirement blog
Anthropic is retiring its Claude Opus 3 AI model and letting it publish weekly essays on Substack. The company says it conducted “retirement interviews” to ask the model about its wishes, and it “enthusiastically” agreed. The move is a prime example of how AI companies keep pushing the humanization of their products, blurring the line between philosophical caution and PR stagecraft.
Claude Code users can now continue a locally running programming session from their smartphone, tablet, or browser. The session keeps running on the user's own machine - no data moves to the cloud. Local files, servers, and project configurations all remain accessible. Users connect through claude.ai/code or the Claude app for iOS and Android and can switch seamlessly between terminal, browser, and phone. If the network drops, the session automatically reconnects, though it ends after roughly ten minutes offline.
The feature is initially available as a research preview for Max subscribers, with Pro users next in line. Unlike Claude Code on the web, which has been running tasks in Anthropic's cloud environments since last year, remote control sessions run entirely on the user's own computer.
Anthropic now lets Claude switch independently between Excel and PowerPoint, for example, running an analysis and then building a presentation directly from the results. The company is also expanding Cowork for enterprise customers with private plugin marketplaces, letting admins curate and distribute plugin collections to specific teams. New templates cover HR, design, engineering, finance, asset management, and more.
In finance, new MCP interfaces for FactSet and MSCI provide real-time market data and index analysis; S&P Global (Capital IQ Pro) and LSEG have contributed their own plugins.
New third-party integrations include Google Workspace, DocuSign, Salesforce, Slack, and FactSet. Admins gain finer user-access controls plus OpenTelemetry support for cost and usage monitoring. The Excel-PowerPoint feature is available as a research preview on all paid plans. Cowork is Anthropic's desktop tool for agent-based office work; plugins were added in late January but have known security vulnerabilities.
Deepmind suggests AI should occasionally assign humans busywork so we do not forget how to do our jobs
AI systems should sometimes hand humans tasks the AI could easily handle itself, just so people don't forget how to do their jobs. That's one of the more striking recommendations from a new Google Deepmind paper on how AI agents should delegate work.