
Salesforce executives signal declining trust in large language models

According to Salesforce leadership, confidence in large language models (LLMs) has slipped over the past year. The Information reports the company is now pivoting toward simple, rule-based automation for its Agentforce product while limiting generative AI in certain use cases.

"We all had more confidence in LLMs a year ago," said Sanjna Parulekar, SVP of product marketing at Salesforce. She points to the models' inherent randomness and their tendency to ignore specific instructions as primary reasons for the shift.

The company also struggles with "drift," where AI agents lose focus when users ask off-topic questions. Salesforce's own studies confirm the behavior remains a persistent challenge.

A spokesperson denied the company is backtracking on LLMs, stating they are simply being more intentional about their use. The Agentforce platform, currently on track for over $500 million in annual sales, allows users to set deterministic rules that strictly constrain the AI's capabilities.
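The idea of deterministic rules gating a generative agent can be sketched as plain if/then routing that runs before any model call, so constrained actions never reach the LLM at all. This is a hypothetical illustration of the concept, not Salesforce's implementation; the rule names and actions are invented:

```python
# Hypothetical sketch: deterministic rules are checked first, and the
# LLM is only invoked when no rule matches. Triggers and actions below
# are illustrative, not Agentforce APIs.

RULES = {
    "refund": "route_to_human",      # refunds always escalate
    "order status": "lookup_order",  # handled by a fixed workflow
}

def handle(message: str) -> str:
    """Apply deterministic rules first; fall back to the LLM only
    when no rule fires."""
    text = message.lower()
    for trigger, action in RULES.items():
        if trigger in text:
            return action            # rule fired: no LLM involved
    return "call_llm"                # unconstrained generative path

print(handle("Where is my order status update?"))  # lookup_order
print(handle("Tell me a joke"))                    # call_llm
```

Because the rules run before the model, outcomes like escalating a refund request are fully predictable regardless of the LLM's randomness, which matches the "strictly constrain" behavior described above.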
