Salesforce scales back use of AI models over reliability concerns

Newspoint


Salesforce, a leading player in the enterprise software industry, is rethinking its reliance on large language models after encountering reliability issues in real-world business use.

According to The Information, Salesforce executives say trust in generative AI has declined over the past year.

This has forced the company to emphasise more predictable, rule-based automation in its flagship AI product, Agentforce, rather than fully LLM-driven workflows.


Salesforce shifts focus to deterministic automation


In light of these concerns, Salesforce is moving Agentforce away from open-ended generative AI and toward "deterministic" automation: workflows that follow explicit, predefined rules rather than leaving decisions to a language model.

The shift aims to reduce unpredictability and ensure consistent results for tasks requiring high precision.

This strategy comes as part of broader internal changes at Salesforce under CEO Marc Benioff's leadership.


AI limitations lead to workforce reduction


Benioff recently revealed on a podcast that Salesforce has cut its support staff from roughly 9,000 employees to 5,000 after deploying AI agents.

While automation helped reduce headcount, it also highlighted the limitations of AI when customers started using these systems at scale.

Executives have noted that the models struggle with complex instructions, which can be costly for enterprise customers.


Salesforce addresses AI reliability issues with deterministic triggers


One case highlighting these challenges was Vivint, a home security firm using Agentforce for customer support.

Despite clear instructions to send satisfaction surveys after every interaction, the agent skipped some surveys for no apparent reason.

To tackle this issue, Vivint and Salesforce introduced deterministic triggers to ensure surveys were sent every time.
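The report does not detail how these triggers are implemented, but the idea can be sketched in a few lines: the survey is fired by a fixed rule on a "case closed" event, outside the model's control, so it cannot be skipped. All names below (`Interaction`, `SurveyQueue`, `on_interaction_closed`) are hypothetical, not Salesforce APIs.

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    case_id: str
    resolved: bool

@dataclass
class SurveyQueue:
    sent: list = field(default_factory=list)

    def send(self, case_id: str) -> None:
        self.sent.append(case_id)

def on_interaction_closed(interaction: Interaction, surveys: SurveyQueue) -> None:
    # Deterministic trigger: the survey is sent by a fixed rule on the
    # "interaction closed" event, never left to the model's judgment.
    surveys.send(interaction.case_id)

surveys = SurveyQueue()
on_interaction_closed(Interaction("case-001", resolved=True), surveys)
on_interaction_closed(Interaction("case-002", resolved=True), surveys)
# Every closed interaction produces exactly one survey.
```

The contrast with an LLM-driven workflow is that nothing here depends on the model deciding a survey is appropriate; the event alone guarantees the side effect.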


Salesforce tackles AI drift in chatbots


Salesforce has also flagged an internal problem called AI "drift," where chatbots lose focus when users ask unrelated questions.

For instance, a bot designed to guide users through a form can get sidetracked instead of completing its main task.
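The article does not say how Salesforce counters drift, but a common pattern (sketched here as an assumption, not the company's confirmed approach) is to wrap the chatbot in a deterministic controller that tracks which form fields are still missing and always steers the conversation back to the next one, no matter what the user asks:

```python
# Hypothetical form-filling controller; field names are illustrative.
REQUIRED_FIELDS = ["name", "email", "address"]

def next_prompt(collected: dict, user_message: str) -> str:
    """Return the bot's next prompt, always re-anchoring on the form."""
    missing = [f for f in REQUIRED_FIELDS if f not in collected]
    if not missing:
        return "All done, thanks!"
    # Guard against drift: whatever the user just asked about, the
    # controlling state decides the next step, not the model.
    return f"Happy to help with that in a moment. First, what is your {missing[0]}?"
```

Here an off-topic question ("what's the weather?") still yields a prompt for the first missing field, because the conversation state, not the model, owns the task.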

The company is working on these issues as part of its ongoing efforts to improve the reliability and effectiveness of its AI systems.