Salesforce executives signal declining trust in large language models
According to Salesforce leadership, confidence in large language models (LLMs) has slipped over the past year. The Information reports the company is now pivoting toward simple, rule-based automation for its Agentforce product while limiting generative AI in certain use cases.
"We all had more confidence in LLMs a year ago," said Sanjna Parulekar, SVP of product marketing at Salesforce. She points to the models' inherent randomness and their tendency to ignore specific instructions as primary reasons for the shift.
The company also struggles with "drift," in which AI agents lose focus when users ask unrelated or distracting questions. Salesforce's own studies confirm this behavior remains a persistent challenge.
A spokesperson denied that the company is backtracking on LLMs, saying it is simply being more intentional about how the models are used. The Agentforce platform, currently on track for over $500 million in annual sales, lets users set deterministic rules that strictly constrain what the AI is allowed to do.
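To make the idea of deterministic rules concrete, here is a minimal sketch of the general pattern: a rule layer that handles or refuses requests before any language model is invoked, with the LLM used only as a fallback. All names and structures below are invented for illustration and do not reflect Salesforce's actual Agentforce API.

```python
# Hypothetical illustration of rule-first agent routing.
# Deterministic rules run before the LLM; the model is only a fallback.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    """A deterministic rule: if `matches` is True, `handle` runs instead of the LLM."""
    matches: Callable[[str], bool]
    handle: Callable[[str], str]


def answer(request: str, rules: List[Rule], llm: Callable[[str], str]) -> str:
    # Check rules in order; only fall back to the generative model if none apply.
    for rule in rules:
        if rule.matches(request):
            return rule.handle(request)
    return llm(request)


# Example rules: refund questions always get a fixed, scripted response,
# so the answer cannot vary with the model's randomness.
rules = [
    Rule(lambda r: "refund" in r.lower(),
         lambda r: "Refunds are processed within 5 business days via the order portal."),
    Rule(lambda r: "price" in r.lower(),
         lambda r: "Current pricing is listed on the pricing page."),
]

if __name__ == "__main__":
    mock_llm = lambda r: f"[LLM-generated reply to: {r}]"
    print(answer("How do I get a refund?", rules, mock_llm))     # deterministic path
    print(answer("Tell me about your roadmap.", rules, mock_llm))  # falls back to the LLM
```

The design choice mirrors the trade-off described above: scripted rules sacrifice flexibility but eliminate the randomness and instruction-following failures that Salesforce cites as reasons for its more cautious use of LLMs.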