Google Labs turns Stitch into a full AI design platform that converts plain text into user interfaces

Google Labs has turned its design tool Stitch into a full AI-powered software design platform. The tool lets users generate user interfaces from natural language prompts, an approach Google is calling "vibe design." Instead of starting with traditional wireframes, users simply describe what they want the experience to look and feel like. Stitch provides an infinite canvas where images, text, and code can all be dropped in as context.

A new design agent analyzes the entire project and can explore multiple ideas at the same time. Users can make real-time changes directly on the canvas using voice control. Design rules can be shared across tools through a new DESIGN.md format, and static designs get converted straight into clickable prototypes.
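Google has not published the DESIGN.md specification in this announcement, so the following is purely an illustrative sketch of the kind of shareable design rules such a file might carry; every section name and value is hypothetical.

```markdown
# DESIGN.md (hypothetical example)

## Brand
- Primary color: #1A73E8
- Typeface: Inter, fallback sans-serif

## Components
- Buttons: 8px rounded corners, no drop shadows
- Forms: labels above inputs, inline validation

## Tone
- Copy is concise and friendly; avoid jargon
```

Because it is plain markdown, a file like this could travel with the project and be read as context by other tools, which is the portability the format is pitched for.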

Stitch is live at stitch.withgoogle.com for users 18 and older in every region where Gemini is available. Developers can also plug it into tools like AI Studio via an MCP server and an SDK. Google is pitching the tool at both professional designers and founders who have no design background.

Google DeepMind upgrades Gemini API with multi-tool chaining and context circulation

Google DeepMind has expanded the Gemini API with several new tools for developers. Built-in tools like Google Search and Google Maps can now be combined with custom functions in a single request. Previously, developers had to handle each step separately, which was slower and more cumbersome.

Results from one tool can now be automatically passed to another through what Google calls context circulation. Each tool call also gets a unique ID, making it easier to track down bugs.

Moreover, Google Maps is now available as a data source for the Gemini 3 model family, providing location data, business information, and commute times. Google recommends the new Interactions API for building these workflows.
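To make the single-request combination concrete, here is a sketch of what such a mixed-tool request payload might look like. Field names follow the public Gemini generateContent REST API, but the "googleMaps" tool name and the ability to mix built-in tools with function declarations in one request are assumptions based on this announcement, not confirmed API details.

```python
import json

# One hypothetical generateContent request that carries two built-in tools
# plus a custom function declaration, instead of three separate requests.
request = {
    "contents": [
        {
            "role": "user",
            "parts": [{"text": "Find a cafe near the office and save the commute time."}],
        }
    ],
    "tools": [
        {"googleSearch": {}},  # built-in: web search
        {"googleMaps": {}},    # built-in: location data (assumed name)
        {
            # Custom function the model can call in the same chain; its result
            # could then feed a follow-up tool call, which is what Google
            # calls "context circulation".
            "functionDeclarations": [
                {
                    "name": "save_commute_estimate",
                    "description": "Persist a commute estimate for the user.",
                    "parameters": {
                        "type": "object",
                        "properties": {"minutes": {"type": "integer"}},
                        "required": ["minutes"],
                    },
                }
            ]
        },
    ],
}

body = json.dumps(request)  # the JSON you would POST to the generateContent endpoint
```

In this shape, each tool result the model produces would arrive tagged with its own call ID, which is what makes the cross-tool debugging mentioned above tractable.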

OpenAI turns model compression into a talent hunt with its 16 MB "Parameter Golf" challenge

In an open research competition called "Parameter Golf," OpenAI is challenging researchers to build the best possible language model under tight constraints - and using the contest to scout talent. Weights and training code combined must stay under 16 MB, and training can take no longer than ten minutes on eight H100 GPUs. Submissions are judged on compression performance against a fixed FineWeb dataset.
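The 16 MB cap translates into a hard ceiling on parameter count that depends on numeric precision. A quick back-of-envelope calculation (my own arithmetic, assuming the limit means 16 MiB and ignoring the small share of the budget taken by the training code itself):

```python
BUDGET_BYTES = 16 * 1024 * 1024  # the 16 MB cap, read here as 16 MiB

# Rough maximum parameter counts at common storage precisions.
bytes_per_param = {"fp32": 4, "fp16/bf16": 2, "int8": 1}
max_params = {p: BUDGET_BYTES // b for p, b in bytes_per_param.items()}

for precision, n in max_params.items():
    print(f"{precision}: ~{n / 1e6:.1f}M parameters")  # e.g. fp16/bf16: ~8.4M
```

So even at 8-bit precision the budget allows only around 17 million parameters, several orders of magnitude below frontier models, which is what makes the compression framing of the challenge interesting.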

OpenAI is putting up one million dollars in computing credits through its partner Runpod. Top performers may get invited for job interviews - the company plans to hire a small group of junior researchers in June, including students and Olympiad winners. The GitHub repository includes baseline models, evaluation scripts, and a public leaderboard. Anyone 18 or older in supported countries can participate through April 30.

The competition for AI talent among big tech companies is more intense than ever. Meta has repeatedly poached top researchers from OpenAI, in some cases offering compensation packages reportedly worth up to 300 million dollars.


Pentagon plans to let AI companies train models on classified data

The US Department of War is working to set up secure environments where AI companies can train their models on classified data. Until now, models were only allowed to read classified data, not learn from it.

Beijing approves Nvidia's H200 chip sales as the company builds a China-ready version of its Groq inference chip

Nvidia has received long-awaited approval from Beijing to sell its second-most-powerful AI chip, the H200, to Chinese customers, Reuters reports. The company had halted production of the chip last year due to regulatory hurdles on both sides of the Pacific.


OpenAI ships GPT-5.4 mini and nano, faster and more capable but up to 4x pricier

OpenAI has released two new compact models—GPT-5.4 mini and nano—built for coding assistants, subagents, and computer control. GPT-5.4 mini nearly matches the full model’s performance, but both new models come with a steep price hike over their predecessors.


GTC 2026: With Groq 3 LPX, Nvidia adds dedicated inference hardware to its platform for the first time

At GTC 2026, Nvidia expanded the Vera Rubin platform it introduced at CES with custom CPU racks, dedicated inference chips, a new storage architecture, an inference operating system, open model alliances, and agent security software.