
Open-weight reasoning models often use far more tokens than closed models, making them less efficient per query, according to an analysis by Nous Research. Open models such as DeepSeek and Qwen use 1.5 to 4 times more tokens than closed models from OpenAI and xAI's Grok-4, and up to 10 times more for simple knowledge questions. Mistral's Magistral models stand out for especially high token use.

Average tokens used per task by different AI models. | Image: Nous Research

In contrast, OpenAI's gpt-oss-120b, with its very short reasoning paths, shows that open models can be efficient, especially on math problems. Token usage depends heavily on the type of task. Full details and charts are available from Nous Research.

High token use can offset low prices in open models. | Image: Nous Research
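The cost trade-off is simple arithmetic: what matters per query is tokens used times price per token, so a lower per-token price can be erased by higher token use. A minimal sketch of that calculation, using hypothetical prices and token counts (the figures below are illustrative assumptions, not Nous Research's data):

```python
def cost_per_query(tokens_used: int, price_per_million: float) -> float:
    """Dollar cost of one query: tokens consumed times per-token price."""
    return tokens_used * price_per_million / 1_000_000

# Hypothetical closed model: fewer reasoning tokens, higher per-token price.
closed = cost_per_query(tokens_used=1_000, price_per_million=10.0)

# Hypothetical open model: 3x the tokens (within the reported 1.5-4x range)
# at a third of the per-token price -- the per-query cost comes out the same.
open_model = cost_per_query(tokens_used=3_000, price_per_million=10.0 / 3)

print(f"closed: ${closed:.4f}, open: ${open_model:.4f}")
```

Under these assumed numbers, a 3x token overhead fully cancels a 3x price advantage, which is the effect the chart illustrates.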

Dynamics Lab has launched Mirage 2, the latest version of its generative game world engine. With Mirage 2, users can upload their own images, such as sketches or photos, and turn them into interactive game worlds. The engine also lets players modify the game in real time by typing commands, and worlds can be saved and shared. While Mirage 2 makes clear technical gains over its predecessor, it still struggles with precise controls and visual stability. Google DeepMind's Genie 3 is far ahead in both areas, but it isn't publicly available yet and likely requires much more computing power. A Mirage 2 demo is available online.
