Salesforce's UK head advises against uniform regulation of all AI firms.
- Salesforce UK and Ireland CEO Zahra Bahrololoumi stated that the tech giant takes all legislation seriously but wants regulations in Britain to be "proportional and tailored."
- The UK boss of Salesforce pointed out that companies developing consumer-facing AI tools and those creating enterprise AI systems face different constraints, with enterprise products held to stricter privacy standards and corporate guidelines.
- A spokesperson for the UK's Department for Science, Innovation and Technology stated that the planned AI rules would be specifically tailored to the companies that create the most advanced AI models, rather than imposing general rules on the use of AI.
The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence but emphasizes that policymakers should not treat all technology companies developing AI systems in the same way.
Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, stated that her company takes all legislation seriously. She added that any British proposals aimed at regulating AI should be "proportional and tailored."
Bahrololoumi pointed out that there is a distinction between companies that create consumer-facing AI tools, such as OpenAI, and those that develop enterprise AI systems, like Salesforce. She explained that consumer-facing AI systems, such as ChatGPT, encounter fewer limitations than enterprise-grade products, which must adhere to stricter privacy standards and corporate guidelines.
Speaking on Wednesday, Bahrololoumi said the company is looking for legislation that is specific, appropriate, and customized.
"There is a distinction between organizations that use consumer-facing technology and consumer tech, and those that use enterprise tech. Although we have different roles in the ecosystem, we are a B2B organization," she stated.
A spokesperson for the UK's Department for Science, Innovation and Technology (DSIT) stated that the planned AI rules would be specifically tailored to the companies developing the most advanced AI models, rather than imposing general rules on the use of AI.
This suggests that Salesforce, which does not develop its own foundation models, may not be subject to the same rules as companies like OpenAI that do.
The DSIT spokesperson said the department recognizes the potential of AI to drive growth and enhance efficiency, and is fully committed to supporting the growth of the UK's AI industry, especially as adoption of the technology accelerates across the economy.
Data security
Salesforce emphasizes the ethical and safety aspects of its Agentforce AI technology platform, which enables enterprise companies to create their own AI "agents" - self-sufficient digital workers that perform tasks for various functions, such as sales, service, or marketing.
Salesforce's "zero retention" feature ensures that customer data is never stored outside of the platform. Consequently, generative AI prompts and outputs aren't stored in Salesforce's large language models, which serve as the foundation for today's genAI chatbots, such as ChatGPT.
According to Bahrololoumi, it is unclear what data is used to train consumer AI chatbots like ChatGPT, Claude, and Meta's AI assistant, or where that data is stored.
""With ChatGPT and consumer models, you don't know what data they're using to train them," she said to CNBC."
Microsoft's Copilot, which is aimed at enterprise customers, carries heightened risks, according to Bahrololoumi, who cited a Gartner report flagging the AI personal assistant's security risks for organizations.
OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.
AI concerns 'apply at all levels'
Bola Rotibi, chief of enterprise research at CCS Insight, told CNBC that while enterprise-focused AI providers are more aware of enterprise-level security and data-privacy requirements, it would be wrong to assume that regulation will only target consumer-facing firms.
Whether at the consumer or enterprise level, all concerns around consent, privacy, transparency, and data sovereignty must be addressed, as these are regulated by laws such as GDPR, which took effect in the UK in 2018.
Enterprise application providers like Salesforce may nevertheless be perceived by regulators as more trustworthy on AI compliance, since they are experienced in delivering enterprise-grade solutions and management support. Such providers are also likely to apply a more nuanced review process to their AI services, she said.
At Salesforce's Agentforce World Tour in London, CNBC interviewed Bahrololoumi about the company's new "agentic" AI technology and its potential benefits for partners and customers.
Her remarks come after U.K. Prime Minister Keir Starmer's Labour government said it would establish "appropriate legislation" for AI, without offering further details, having refrained from introducing a dedicated AI bill in the King's Speech.