The primary concern corporations have with gen AI usage is not hallucinations.
- The downsides of generative artificial intelligence include hallucinations, code errors, copyright infringement, and perpetuated bias.
- But what organizations worry about most is data leaks.
- Nearly half of companies that implemented AI solutions experienced unintended data exposure, and 80% of companies named data security their top concern, according to recent executive surveys.
While generative artificial intelligence offers numerous benefits, it also presents challenges: hallucinations, code errors, copyright infringement, perpetuated bias and, above all for many organizations, data leaks.
According to a recent survey from Alteryx, 77% of companies report successful gen AI pilots, but 80% cite data privacy and security concerns as the top challenges in scaling AI. Meanwhile, 45% of organizations encountered unintended data exposure when implementing AI solutions, according to AvePoint's 2024 AI and Information Management Report. Microsoft AI's leak of 38 terabytes of data late last year is just one example of how big this problem can get.
Dana Simberkoff, chief risk, privacy and information security officer at AvePoint, stated that AI has intensified some of the difficulties related to data management.
Simberkoff explains that much of the leaked information is unstructured data sitting in collaboration spaces, unprotected but previously undiscovered simply because it was so hard to find. This is often referred to as "dark data."
Arvind Jain, CEO and cofounder of Glean, a company that builds AI-powered enterprise search tools, was named to the 2024 CNBC Disruptor 50 list. Jain believes there is immense pressure on chief information officers and related roles to deploy AI, which can lead to errors in the race to modernize. He says AI fundamentally changes the way employees search for information. "It was so hard to find anything. Nobody knows where to look," said Jain. "That's the thing that AI fundamentally changes. We don't have to go and look anywhere anymore. You just have to ask a question."
Jain asserts that enterprise data often contains privacy concerns, and that inadequate permissions can expose sensitive information. Although his search platform prioritizes organizational permissions, it is the responsibility of leaders to safeguard their data before integrating AI.
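The permission-aware approach Jain describes can be illustrated with a minimal sketch. This is a hypothetical toy model, not Glean's implementation: a real enterprise search tool would defer to each source system's access-control lists rather than a local allowlist, but the principle is the same, matching documents are returned only if the querying user could already see them.

```python
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    content: str
    # Groups already permitted to view this document (hypothetical ACL model).
    allowed_groups: set = field(default_factory=set)


def search(query: str, docs: list, user_groups: set) -> list:
    """Return only matching documents the user is permitted to see."""
    hits = [d for d in docs if query.lower() in d.content.lower()]
    # Filter by existing permissions so AI-powered search cannot widen access.
    return [d for d in hits if d.allowed_groups & user_groups]


docs = [
    Document("handbook", "Vacation policy and benefits", {"all-staff"}),
    Document("ma-memo", "Confidential merger discussion", {"executives"}),
]

# The merger memo matches the query but is filtered out for regular staff.
staff_results = search("confidential", docs, {"all-staff"})
```

The key design point is that the permission check happens on every query, so the search layer never becomes a side channel around the underlying data's safeguards.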
Shining a light on unprotected 'dark data'
Besides customer and employee personal information, various types of sensitive documents can cause harm if accessed by the wrong parties within an organization: termination letters, confidential discussions about mergers and acquisitions, and the like. The risks are real and can arise from employee dissatisfaction, insider trading, or other factors.
"Ignoring information is never beneficial," said Simberkoff. "When you illuminate that hidden data, suddenly it becomes impossible to ignore."
Simberkoff follows a simple philosophy: protect what you value, and enhance what you measure.
How can leaders improve data permissions and safeguards before, or in response to, AI implementation?
Jason Hardy, chief technology officer for AI at data infrastructure company Hitachi Vantara, says the issue is not turning the AI on, but the six crucial steps that must be taken beforehand to understand your data. These include logging the data, processing it with vendor-supplied structuring and search tools, and consistently reviewing the information over time.
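The first of those steps, logging what data you actually have, can be sketched in a few lines. This is a hypothetical illustration, not a Hitachi Vantara tool: it walks a directory tree and records basic metadata to a CSV so that stale or unexpected files can be reviewed before any AI system is pointed at them.

```python
import csv
import os
import time


def inventory(root: str, out_csv: str) -> int:
    """Log path, size, and last-modified date for every file under `root`.

    A minimal 'know your data first' sketch; real inventories would also
    capture owners, permissions, and content types. Returns the file count.
    """
    count = 0
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["path", "bytes", "last_modified"])
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                stat = os.stat(path)
                modified = time.strftime("%Y-%m-%d",
                                         time.localtime(stat.st_mtime))
                writer.writerow([path, stat.st_size, modified])
                count += 1
    return count
```

Running this periodically and diffing the output is one lightweight way to satisfy Hardy's "consistently review over time" step: new or unexpectedly large files stand out immediately. Note that `out_csv` should live outside `root`, or the log will count itself on the next run.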
Hardy emphasizes the importance of policies to prevent leaks and enforcement to manage information if it is released.
"Training is crucial to ensure end users are aware of the information you're responsible for," Hardy said. "We have approved tools to use, but also as we bring them into our systems, let's have safeguards in place."
According to Simberkoff, prioritizing high-risk information and implementing data labeling, classification, and tagging across your organization's ecosystem is crucial.
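A toy version of the labeling step Simberkoff describes might look like the following. The patterns here are hypothetical placeholders; in practice, organizations would rely on vendor classifiers and human review rather than a handful of regular expressions, but the output is the same idea: sensitivity tags that let high-risk documents be prioritized for protection.

```python
import re

# Hypothetical sensitivity patterns; real classifiers are far more robust.
PATTERNS = {
    "pii": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # SSN-like numbers
    "financial": re.compile(r"\b(merger|acquisition)\b", re.I),
    "hr": re.compile(r"\b(termination|severance)\b", re.I),
}


def classify(text: str) -> set:
    """Tag a document with every sensitivity label whose pattern matches."""
    return {label for label, pattern in PATTERNS.items() if pattern.search(text)}


classify("Termination letter regarding severance terms")  # {'hr'}
```

Once documents carry tags like these, access reviews can start with the highest-risk labels instead of treating every file in a collaboration space equally.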
A go-slow approach to AI implementation
Simberkoff advises leaders not to rush the adoption of AI and to pause if necessary. Instead, she suggests taking incremental steps, such as starting with an acceptable use policy and strategy, and testing the waters with a pilot.
Understanding your data over time is crucial as regulations and laws change.
Doing the right thing early on keeps your company out of the headlines.
AI is not a flawless technology, Simberkoff reminds leaders. These algorithms are prone to errors, and their accuracy depends on the quality of the data they are fed, so it is crucial to verify their output and use AI only for its intended purpose.
User education is also essential to ensure an AI assistant performs correctly and does not stray off topic.
Jain advises companies, particularly large enterprises, to adopt a centralized AI strategy to evaluate tools and decide which data they will connect to. However, limited information can result in limited value, so it is essential to connect as much information as possible while maintaining appropriate permissions. Moreover, a soft rollout is a wise approach to test a new program before implementing it company-wide.
"AI is our best friend," Simberkoff said. "It's going to really push the organization to take those steps that they should have been taking all along."