
Ex-OpenAI CTO Mira Murati introduces Tinker, an API for fine-tuning open-weight LLMs


Key Points

  • Thinking Machines, a company founded by former OpenAI CTO Mira Murati, has launched Tinker, an API that helps teams with limited computing resources train and customize open-weight language models for their own needs.
  • Tinker handles infrastructure management, leverages LoRA for efficient parallel fine-tuning on shared hardware, and supports popular open-weight models such as Meta's Llama and Alibaba's Qwen (including Qwen-235B-A22B), with minimal code changes required to switch models.
  • The API is currently available in closed beta and is free for now, but a usage-based pricing structure will be introduced in the future.

Thinking Machines, the startup founded by former OpenAI CTO Mira Murati, has introduced its first product: Tinker, an API for training language models.

Tinker is meant to make it easier for teams to work with open-weight models, even if they lack access to large-scale compute. The service is aimed at researchers and developers who want to train or fine-tune these models without having to manage complex infrastructure themselves.

According to the company, the goal is to enable "more people to do research on cutting-edge models and customize them to their needs." At launch, Tinker works with several open-weight models from Meta (Llama) and Alibaba (Qwen), including larger models like Qwen-235B-A22B. Users can switch between models by updating a single string in the code, according to the company.

Tinker supports open-weight models from Meta and Alibaba, including large-scale variants like Qwen-235B-A22B. | Image: Thinking Machines
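Thinking Machines hasn't published code in the announcement, so the snippet below is only a rough sketch of what a "swap one string" workflow could look like. The client class and method names are hypothetical stand-ins, not Tinker's actual API.

    # Hypothetical sketch: switching base models by changing one string.
    # FineTuningClient and its methods are illustrative assumptions,
    # not Tinker's documented interface.

    class FineTuningClient:
        """Stand-in for a managed fine-tuning service client."""
        def __init__(self, base_model: str):
            self.base_model = base_model  # the only per-model setting

        def train(self, dataset_path: str) -> None:
            # A real service would submit a training job here;
            # this stub just echoes what would happen.
            print(f"fine-tuning {self.base_model} on {dataset_path}")

    # Switching from Qwen to Llama is a one-line change:
    client = FineTuningClient(base_model="Qwen-235B-A22B")
    # client = FineTuningClient(base_model="Llama-3.1-8B")
    client.train("my_dataset.jsonl")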

Tinker runs as a managed service on the company’s own compute clusters. Training jobs can be started immediately, while the platform handles resource management, fault tolerance, and scheduling. Under the hood, Tinker uses LoRA (Low-Rank Adaptation), which lets several fine-tuning jobs run in parallel on shared hardware: because each job trains only a small set of adapter weights, different jobs can share the same compute pool, which should help reduce costs, the company says.
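Tinker's internals aren't public, but the LoRA technique itself is well documented. The PyTorch sketch below is illustrative only: the pretrained weight matrix stays frozen (and can be shared across jobs), while each fine-tuning run trains just a small pair of low-rank matrices.

    # Minimal LoRA sketch (illustrative, not Tinker's implementation).
    # Effective weight: W + (alpha / rank) * B @ A, with A and B low-rank.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, in_features: int, out_features: int,
                     rank: int = 8, alpha: int = 16):
            super().__init__()
            # Frozen pretrained weight, shared across all fine-tuning jobs.
            self.base = nn.Linear(in_features, out_features, bias=False)
            self.base.weight.requires_grad = False
            # Trainable low-rank update; this small per-job state is what
            # makes running many jobs on shared hardware cheap.
            self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
            self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

Because lora_b starts at zero, the adapted layer initially behaves exactly like the frozen base model, and each job's trainable state stays tiny compared to the full weights.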


Beta and Pricing

Tinker is currently in closed beta. Interested users can join the waitlist. The service is free during the beta, with usage-based pricing planned for the coming weeks.

Developers can also use the Tinker Cookbook, an open library of standard post-training methods that run on the Tinker API. The cookbook is intended to help avoid common fine-tuning errors.

Murati left OpenAI in fall 2024 after reports of internal tensions. By launching a fine-tuning API, she signals that she doesn't see OpenAI's models as unbeatable or expect superintelligence anytime soon.

Along with other former OpenAI colleagues like co-founder John Schulman and researchers Barret Zoph and Luke Metz, Murati appears to believe that fine-tuned open-weight models offer more flexibility and economic value right now than closed models like GPT-5. Otherwise, it would be hard to explain the focus on building specialized fine-tuning tools.

