Why GPT-4o's "personality" can't be recreated

A developer at OpenAI known as "Roon" on X explains why large language models never behave exactly the same way twice. Roon says a model's "personality" can shift with every training run, even if the dataset doesn't change. That's because training includes stochastic elements, such as the sampling involved in reinforcement learning, so each run ends up at a different point in what's called "model space." As a result, every training pass produces slightly different behavior. Roon adds that even within a single training run, it's nearly impossible to recreate the same personality.
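Roon's point can be illustrated with a toy sketch (my own illustration, not OpenAI's training code): two "training runs" that minimize the same loss from the same starting point, differing only in the random noise injected at each step, still land at different final weights.

```python
import random

def train(seed, steps=200, lr=0.1):
    """Toy 'training run': minimize the loss (w - 1)^2 with noisy gradients.

    The Gaussian noise stands in for the stochasticity of real training,
    e.g. minibatch sampling or the sampling used in reinforcement learning.
    """
    rng = random.Random(seed)
    w = 0.0  # identical starting point and objective for every run
    for _ in range(steps):
        grad = 2 * (w - 1.0) + rng.gauss(0, 0.5)  # true gradient + noise
        w -= lr * grad
    return w

# Three runs of the "same" training: each converges near the optimum w = 1,
# but stochasticity leaves each at a slightly different point in model space.
runs = [train(seed) for seed in (0, 1, 2)]
print(runs)
```

Every run solves the same problem, yet no two finish with identical weights, which is the small-scale analogue of why a model's "personality" differs between training runs.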

Video: via X

OpenAI tries to keep these "personality drifts" in check, since users often get attached to a model's unique quirks. This was especially true of the earlier, overly sycophantic version of GPT-4o, which some users still miss. Roon, however, wasn't a fan. He even publicly wished for that "insufficiently aligned" model's "death" before deleting the post.
