AI startup Thinking Machines wants to make large language models more predictable. The team is studying why large language models sometimes give different answers to the same question, even when temperature is set to 0, a setting that should always return the most probable answer.

Despite a temperature setting of 0, DeepSeek 3.1 generates different answers to the same query. | Image: Thinking Machines

According to Thinking Machines, the common explanation that GPU precision is to blame is "not entirely wrong" but "doesn't reveal the full picture." Server load also shapes a model's output: under heavy load, incoming requests are batched differently, which changes how the same computations are grouped on the GPU, so the same model can return slightly different results. To fix this, the team developed a custom inference method that keeps outputs consistent regardless of system load. More predictable behavior like this could make AI-supported research more reliable.
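The underlying effect comes down to floating-point arithmetic: addition is not associative, so if varying load changes how a GPU kernel groups its computations, the same sum can round differently. A minimal Python sketch of the effect (illustrative only, not Thinking Machines' inference code):

```python
# Floating-point addition is not associative: regrouping the same
# three numbers changes the rounding and therefore the result.
a = (0.1 + 0.2) + 0.3
b = 0.1 + (0.2 + 0.3)

print(a)        # 0.6000000000000001
print(b)        # 0.6
print(a == b)   # False
```

Tiny discrepancies like this, accumulated over billions of operations, can flip which token counts as "most probable" and send a generation down a different path even at temperature 0.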

Matthias is the co-founder and publisher of THE DECODER, exploring how AI is fundamentally changing the relationship between humans and computers.