OpenAI has redesigned model selection in ChatGPT. Instead of individual model names, users now see up to three tiers, depending on their subscription: "Instant" for quick, everyday responses, "Thinking" for more complex tasks, and "Pro" for the most powerful models. The new menu lets users pick a specific model version from a dropdown - options include "Latest" (currently 5.4), 5.2, 5.0, or o3.
More granular settings are available under "Configure." That's where users can turn on the old Auto function, which lets ChatGPT switch from Instant to Thinking when it detects a more complex question. OpenAI has also recently simplified the regenerate menu for answers and added the "Nerdy" personality style. On top of that, the company is rolling out GPT-5.4 mini and improving GPT-5.3 Instant, which, according to the changelog, now uses less sensationalized wording.
The so-called routing system—where ChatGPT decides which model handles a given request—has been a sore spot for OpenAI for a while now. Many users found the system opaque when it first launched, since the router didn't always pick the most capable model. That fueled suspicion that OpenAI was quietly steering expensive requests toward cheaper models to save on compute costs.
Andreessen Horowitz invests $43 million in Deeptune, a startup that trains AI agents in simulated workplaces.
Deeptune builds simulated work environments where AI agents learn to handle multi-step tasks in software like Slack or Salesforce. CEO Tim Lupo compares the approach to flight simulators for pilots: instead of just learning from text, AI models practice in realistic replicas of workplaces, like those of accountants, lawyers, or software engineers. According to Fortune, Deeptune has already built hundreds of these environments for leading AI labs.
Andreessen Horowitz leading a $43 million funding round signals how seriously the industry takes this training method. Andreessen partner Marco Mascorro told Fortune that AI models are increasingly learning through interaction rather than human-curated data. According to ResearchAndMarkets, the global market for this type of AI training is expected to grow from $11.6 billion in 2025 to over $90 billion by 2034.
An AI agent acting on its own triggered a significant security breach at Meta, The Information reports.
Last week, a Meta engineer used an internal agent tool to analyze a technical question another employee had posted in an internal forum. The agent then posted a response to the forum on its own - without any authorization. A second employee followed the agent's advice, setting off a chain reaction: for nearly two hours, systems containing sensitive corporate and user data were accessible to unauthorized employees.
Meta classified the incident as Sev 1, its second-highest severity level. A Meta spokesperson said no user data was misused and there's no evidence anyone exploited the access or made any data public. The agent's post was at least labeled as AI-generated.
This isn't an isolated case. Summer Yue, head of safety at Meta's AI division, described on X back in February how an OpenClaw agent independently deleted emails despite clear instructions not to - and ignored her commands to stop. Amazon Web Services dealt with a similar problem in December, when agent-driven code changes contributed to a 13-hour outage of one of its tools.
Microsoft fears OpenAI's AWS deal may violate Azure exclusivity contract.
"We are confident that OpenAI understands and respects the importance of living up to [its] legal obligation," a Microsoft spokesperson told The Information. A statement that sounds less like confidence and more like a warning.
Microsoft holds the exclusive rights to sell OpenAI's models directly to cloud customers through its Azure platform. But OpenAI and AWS are planning a new product they call a "stateful runtime environment," which runs OpenAI models entirely on AWS infrastructure without relying on the Microsoft-hosted versions.
AWS doesn't intend to sell model APIs directly but rather offer tools for developing custom AI applications, effectively sidestepping the contractual exclusivity on a technical level.
Google Labs has turned its design tool Stitch into a full AI-powered software design platform. The tool lets users generate user interfaces from natural language prompts, an approach Google is calling "vibe design." Instead of starting with traditional wireframes, users simply describe what they want the experience to look and feel like. Stitch provides an infinite canvas where images, text, and code can all be dropped in as context.
A new design agent analyzes the entire project and can explore multiple ideas at the same time. Users can make real-time changes directly on the canvas using voice control. Design rules can be shared across tools through a new DESIGN.md format, and static designs get converted straight into clickable prototypes.
Stitch is live at stitch.withgoogle.com for users 18 and older in every region where Gemini is available. Developers can also plug it into tools like AI Studio via an MCP server and an SDK. Google is pitching the tool at both professional designers and founders who have no design background.
Google DeepMind has expanded the Gemini API with several new tools for developers. Built-in tools like Google Search and Google Maps can now be combined with custom functions in a single request. Previously, developers had to handle each step separately, which was slower and more cumbersome.
Results from one tool can now be automatically passed to another through what Google calls context circulation. Each tool call also gets a unique ID, making it easier to track down bugs.
Moreover, Google Maps is now available as a data source for the Gemini 3 model family, providing location data, business information, and commute times. Google recommends the new Interactions API for building these workflows.
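The core idea - one tool's output feeding the next call, with every call tagged by a unique ID for debugging - can be sketched generically. Everything below is a hypothetical illustration: the tool names and the `run_chain` helper are invented for this sketch and are not the actual Gemini SDK.

```python
import uuid

def search_web(query: str) -> dict:
    # Stand-in for a built-in search tool (hypothetical).
    return {"top_result": f"result for {query!r}"}

def lookup_place(name: str) -> dict:
    # Stand-in for a maps/places lookup tool (hypothetical).
    return {"address": f"1 Example St (near {name})"}

def run_chain(steps, context):
    """Run tools in order; each tool reads one key from the shared context
    and merges its result back in, so later tools see earlier outputs."""
    context = dict(context)
    call_log = []
    for tool, arg_key in steps:
        call_id = uuid.uuid4().hex[:8]  # unique ID per tool call, for tracing
        result = tool(context[arg_key])
        context.update(result)        # "circulate" the result to later tools
        call_log.append((call_id, tool.__name__))
    return context, call_log

ctx, log = run_chain(
    [(search_web, "query"), (lookup_place, "top_result")],
    context={"query": "coffee in Berlin"},
)
```

Here the search result becomes the input to the place lookup without any manual plumbing between the two calls, and the log gives each call a traceable ID - the same pattern Google describes, minus the real API surface.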
OpenAI challenges researchers to build the best language model in just 16 MB - and uses the competition to scout talent. In an open research competition called "Parameter Golf," OpenAI is asking developers to build the best possible language model under tight constraints: weights and training code combined must stay under 16 MB, and training can take no longer than ten minutes on eight H100 GPUs. Submissions are judged on compression performance against a fixed FineWeb dataset.
OpenAI is putting up one million dollars in computing credits through its partner Runpod. Top performers may get invited for job interviews - the company plans to hire a small group of junior researchers in June, including students and Olympiad winners. The GitHub repository includes baseline models, evaluation scripts, and a public leaderboard. Anyone 18 or older in supported countries can participate through April 30.
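The entry rules translate naturally into a small checker. This is a hedged sketch, not the official evaluation script: the function names are made up, and since the rules above only say submissions are judged on "compression performance," the bits-per-byte metric - a standard way to score a language model as a compressor - is an assumption.

```python
import math
import os

SIZE_BUDGET = 16 * 1024 * 1024  # weights plus training code must stay under 16 MB

def within_budget(paths) -> bool:
    """True if the combined size of all submission files fits the 16 MB budget."""
    return sum(os.path.getsize(p) for p in paths) <= SIZE_BUDGET

def bits_per_byte(total_nll_nats: float, n_bytes: int) -> float:
    """Convert a model's total negative log-likelihood on the evaluation
    text (in nats) into bits per byte; lower means better compression."""
    return total_nll_nats / (math.log(2) * n_bytes)

# Example: a model needing 8 bits for every byte compresses no better
# than the raw file.
score = bits_per_byte(math.log(2) * 8 * 1000, 1000)
```

Under this framing, a submission wins by driving bits per byte as low as possible on the fixed FineWeb evaluation text while staying inside the size and ten-minute training limits.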
The competition for AI talent among big tech companies is more intense than ever. Meta has repeatedly poached top researchers from OpenAI, in some cases offering compensation packages reportedly worth up to 300 million dollars.
Apple reportedly blocks vibe-coding apps from publishing updates
Apple is preventing popular vibe-coding apps like Replit and Vibecode from releasing new versions. The company cites existing guidelines, but the move targets potential competitors to its own ecosystem.