
Salesforce customers receive generative AI boost with new launch

Salesforce CEO Marc Benioff is shown delivering a keynote yesterday in New York City to mark the launch of AI Cloud.

Salesforce on Monday launched AI Cloud, which it described as a means for its customer base to experience the benefits of generative artificial intelligence (AI) safely and securely.

According to the San Francisco-based CRM giant, the offering is a “suite of capabilities optimized for delivering trusted, open, and real-time generative experiences across all applications and workflows.”

The heart of AI Cloud, it added, is Einstein, the company’s AI engine that was first launched in September 2016 and “now powers over one trillion predictions per week.”

Salesforce stated in a backgrounder document that “unlike consumer AI, like Apple’s Siri and Amazon Alexa, enterprise customers require higher levels of trust and security, especially in regulated industries.

“Salesforce has thousands of customers each using their own models. Einstein needs to ensure that the models are trusted, that customer data remains safe and secure, delivers accurate, unbiased results, and adheres to compliance requirements for customers dealing with more sensitive data, as in the finance, healthcare, or government sectors. And once a new model is shipped, it has to be re-trained on their freshest data.”

The core piece of the security strategy is the Einstein Trust Layer, a standard that “helps resolve enterprise concerns of risks associated with adopting generative AI by meeting enterprise data security and compliance demands. The Einstein Trust Layer prevents LLMs from retaining sensitive customer data, ensuring customers can maintain data governance controls, while still leveraging the immense potential of generative AI.”
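Salesforce has not published the Trust Layer’s internals, but the behaviour it describes, keeping sensitive values out of what an external LLM sees and retains, matches a familiar masking pattern. The sketch below is a minimal illustration of that pattern only; the regex rules and the `llm_call` hook are hypothetical stand-ins, not Salesforce APIs.

```python
import re

# Hypothetical masking rules: swap common PII patterns for placeholder
# tokens before a prompt leaves the organization. Production systems use
# far more robust entity detection; these regexes are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask_pii(text: str) -> tuple[str, dict[str, str]]:
    """Replace sensitive values with tokens; keep a local map to restore them."""
    replacements: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, value in enumerate(pattern.findall(text)):
            token = f"<{label}_{i}>"
            replacements[token] = value
            text = text.replace(value, token)
    return text, replacements

def unmask(text: str, replacements: dict[str, str]) -> str:
    """Re-insert the original values once the response is back in-house."""
    for token, value in replacements.items():
        text = text.replace(token, value)
    return text

def complete_with_trust_layer(prompt: str, llm_call) -> str:
    """`llm_call` stands in for any provider client invoked with retention
    disabled, so the model operator never stores raw customer data."""
    masked, replacements = mask_pii(prompt)
    response = llm_call(masked)  # only masked text leaves the organization
    return unmask(response, replacements)

# Usage with a stand-in model that simply echoes its input:
echo = lambda p: f"Draft reply for: {p}"
print(complete_with_trust_layer("Email jane@example.com about renewal", echo))
```

The key design point is that the token-to-value map never leaves the organization, so the external model can neither see nor retain the underlying customer data.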

That is critical, according to findings from a Salesforce research study released this month. The survey of more than 4,000 full-time employees found that while upwards of 73 per cent plan to use the technology, a majority admit that generative AI poses new security risks.

Paula Goldman, chief ethical and humane use officer at Salesforce, said that generative AI has the potential to help businesses connect with their audiences in new, more personalized ways. As companies embrace this technology, they need to ensure that ethical guidelines and guardrails are in place for the safe and secure development and use of generative AI.

The Einstein Trust Layer, the company added, will also provide deployment capabilities for any relevant Large Language Model (LLM), while helping organizations maintain their data privacy, security, residency, and compliance goals.
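Supporting “any relevant LLM” implies a provider-agnostic interface: application code talks to a single abstraction, and each backend carries its own region and retention configuration. The sketch below shows that shape only; the class and model names are hypothetical, not part of any Salesforce or vendor API.

```python
from typing import Protocol

class LLMBackend(Protocol):
    """Minimal interface any model provider must satisfy."""
    def complete(self, prompt: str) -> str: ...

class ExternalModel:
    """Hypothetical wrapper for a hosted model, pinned to a region and
    called with retention disabled to meet residency and compliance goals."""
    def __init__(self, name: str, region: str):
        self.name, self.region = name, region

    def complete(self, prompt: str) -> str:
        # A real implementation would call the provider's API here; this
        # placeholder just shows where that call sits in the design.
        return f"[{self.name}/{self.region}] response to: {prompt}"

class InHouseModel:
    """Hypothetical self-hosted model for data that must never leave the org."""
    def complete(self, prompt: str) -> str:
        return f"[local] response to: {prompt}"

def generate(backend: LLMBackend, prompt: str) -> str:
    # Application code stays identical no matter which backend is configured.
    return backend.complete(prompt)

print(generate(ExternalModel("demo-model", "us-east"), "Summarize this case"))
print(generate(InHouseModel(), "Summarize this case"))
```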

One such organization is RBC Wealth Management USA. Greg Beltzer, its head of technology, said, “Embedding AI into our CRM has delivered huge operational efficiencies for our advisors and clients.

“We believe that this technology has the potential to transform the way businesses interact with their customers, deliver personalized experiences and drive customer loyalty.”

Also Monday, at an event in New York City called Salesforce AI Day, the company announced an expansion of its Generative AI Fund from US$250 million to US$500 million.

Paul Drews, managing partner of Salesforce Ventures, said that the expansion “enables us to work with even more entrepreneurs who are accelerating the development of transformative AI solutions for the enterprise.”

The fund has already invested in several AI firms, including Hearth, You.com, Anthropic and Cohere.

During a keynote address at the NYC event, Salesforce chief executive officer (CEO) Marc Benioff said that with the AI Cloud launch, the company’s customer base now has the “ability to use generative AI without sacrificing their data privacy and data security. This is critical for each and every one of our customers all over the world, for every transaction and every conversation in Salesforce begins and ends with the word ‘Trust’. We understand that well.

“And there’s one other critical part of all of this. It’s not just about trusted AI, and delivering the technology to the right person at the right time, but it’s also about responsibility.

“As we’re all going to learn – because we’re now on a societal AI journey – there is going to be a lot about responsibility in this technology. We’ve all seen the movies, and we’ve all seen where this can go, haven’t we? We all have these crazy ideas in our head of what could happen. There are many different possible scenarios, so that’s why responsible AI use is so critical.”
