A developer at OpenAI who posts as "Roon" on X explains why large language models never behave exactly the same way twice. According to Roon, a model's "personality" can shift with every training run, even when the dataset stays the same. Training involves stochastic processes, such as reinforcement learning, so each run lands at a slightly different point in what's called "model space" and therefore produces slightly different behavior. Roon adds that even within a single training run, it's nearly impossible to recreate the same personality.
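To make the idea concrete, here is a minimal, hypothetical sketch (a tiny NumPy network, not OpenAI's actual pipeline): two training runs on identical data that differ only in their random seed settle on different weights and give slightly different answers to the same input.

```python
# Minimal sketch: same data, different random seeds -> different points in
# "model space" and slightly different outputs. Purely illustrative.
import numpy as np

def train(seed, X, y, hidden=8, steps=2000, lr=0.05):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], hidden))  # random initialization
    W2 = rng.normal(scale=0.5, size=(hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)          # forward pass
        pred = h @ W2
        err = pred - y
        gW2 = h.T @ err / len(X)     # backpropagation for the two-layer net
        gW1 = X.T @ ((err @ W2.T) * (1 - h**2)) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sin(X.sum(axis=1, keepdims=True))   # fixed dataset, shared by both runs

(W1a, W2a), (W1b, W2b) = train(1, X, y), train(2, X, y)
probe = rng.normal(size=(1, 3))            # same input to both trained models
out_a = (np.tanh(probe @ W1a) @ W2a).item()
out_b = (np.tanh(probe @ W1b) @ W2b).item()
print("same input, run A:", round(out_a, 4), "run B:", round(out_b, 4))
```

The two runs typically agree roughly but not exactly, which mirrors the point: identical data does not guarantee identical behavior once randomness enters training.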


Video: via X

OpenAI tries to keep these "personality drifts" in check, since users often grow attached to a model's quirks. That was especially true of the earlier, sycophantic version of GPT-4o, which some users still miss. Roon, however, was not a fan: he publicly wished for that "insufficiently aligned" model's "death" before deleting the post.
