AI copilots are increasing the difficulty and expense of defending against internal breaches.
- The primary way organizations implement generative AI is through copilot programs such as Microsoft Copilot, GitHub Copilot, Salesforce's Einstein Copilot, and Adobe Firefly.
- These programs may increase the likelihood of insiders obtaining unintended information.
- The insecurity stems not from the AI itself, but from organizations failing to clean up their data and establish appropriate access permissions before deploying copilots.
Data security experts, including Matt Radolec, vice president of incident response at Varonis, warn that security through obscurity is no longer effective.
According to a recent Gartner report, the primary way organizations implement gen AI is through copilot programs such as Microsoft Copilot, GitHub Copilot, Salesforce's Einstein Copilot, and Adobe Firefly.
Radolec explained that copilots have pass-through permissions, meaning the technology can access whatever an employee can access. Unlike a human, however, a copilot can quickly sift through a corporate-wide database and surface files and data for someone who was never meant to see them.
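The amplification effect can be sketched in a few lines. In this toy permission model (the `Document`, `User`, and `copilot_search` names are illustrative, not any vendor's API), the copilot checks nothing beyond the user's existing permissions, but it sweeps the entire corpus at once:

```python
from dataclasses import dataclass

@dataclass
class Document:
    name: str
    sensitive: bool
    allowed_groups: set  # groups granted read access

@dataclass
class User:
    name: str
    groups: set

def can_read(user: User, doc: Document) -> bool:
    # Pass-through permissions: the copilot applies exactly the same
    # check the user's own account would pass.
    return bool(user.groups & doc.allowed_groups)

def copilot_search(user: User, corpus: list, query: str) -> list:
    # Unlike a human browsing folders one at a time, the copilot scans
    # the whole corpus instantly and returns every readable match --
    # including sensitive files that were shared too broadly.
    return [d.name for d in corpus
            if can_read(user, d) and query in d.name.lower()]

corpus = [
    Document("q3-payroll.xlsx", True, {"hr"}),
    Document("salary-bands.docx", True, {"everyone"}),  # over-shared
    Document("picnic-plan.docx", False, {"everyone"}),
]
intern = User("intern", {"everyone"})
print(copilot_search(intern, corpus, "salary"))  # -> ['salary-bands.docx']
```

The over-shared `salary-bands.docx` was always technically readable by the intern; the copilot simply collapses the effort of finding it to a single query.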
The presence of copilots can increase the likelihood of insiders gaining access to sensitive information, regardless of their intentions.
According to Radolec, the reason data is insecure is not due to the AI itself, but because organizations have not properly secured access to the data obtained through a copilot.
This is where permissions come into play.
Data permissions and zero-trust security
Access control is a crucial aspect of cybersecurity, and best practices typically involve granting employees the least amount of access necessary to perform their job duties. This approach aligns with the zero trust architecture, which aims to prevent attacks by not automatically trusting any user or device.
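Stripped to its core, a zero-trust access check is a deny-by-default rule: a request succeeds only if an explicit grant exists, and nothing is trusted implicitly. A minimal sketch (the `grants` table and function name are illustrative):

```python
# Deny-by-default access control: a request is allowed only when an
# explicit (user, resource, action) grant exists. There are no implicit
# defaults and no inherited trust -- least privilege in miniature.
grants = {
    ("alice", "payroll-db", "read"),
    ("alice", "payroll-db", "write"),
    ("bob", "payroll-db", "read"),
}

def is_allowed(user: str, resource: str, action: str) -> bool:
    return (user, resource, action) in grants

print(is_allowed("bob", "payroll-db", "read"))   # True: explicit grant
print(is_allowed("bob", "payroll-db", "write"))  # False: never granted
```

The design choice matters for copilots: because the tool inherits whatever this check allows, every stale or overly broad grant becomes instantly discoverable.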
In practice, people often get lazy: instead of setting permissions deliberately, they click buttons that share documents with everyone, with unintended consequences. Radolec emphasized that individuals and the industry alike need to prioritize data security and handle digital data with the same care they would give a printed document.
Malicious insiders pose a real threat, despite organizations' tendency to assume every hire has pure intentions.
Employees are leaking sensitive information to competitors, using it for personal gain, and in some cases committing fraud, Radolec said. Earlier this year, a client hired Varonis to determine how an administrative employee had abused data-access privileges to learn the exact maximum raise they could get. Lax controls like these, Radolec emphasized, make it easy for employees to commit fraud.
Radolec has also observed criminal groups running call centers of five or six people to steal identities at scale. "Copilots simplify the process. Now, they can request a list of customers and their Social Security numbers, broken out by customer lifetime value or total income, directly from the copilot without waiting for the phone call to occur."
Shawnee Delaney, CEO of Vaillance Group, suggests there are many ways to reduce the risk of data breaches through layered defenses she calls the "security onion." One layer is adhering to standards for collecting and storing customer data; the cost of falling short was underscored by Oracle's recent $115 million privacy settlement.
To safeguard against malicious actors within an organization, a comprehensive strategy that addresses both technical and human aspects is necessary. Delaney's background in recruiting spies as a former Defense Intelligence Agency case officer and establishing insider threat programs at companies such as Merck and Uber makes her well-suited to handle the range of threats an organization may face from within and the methods for managing them effectively.
Where technical and human cybersecurity meet
To prevent internal threats from manifesting, Delaney emphasizes the significance of conducting thorough risk assessments and third-party security audits, especially with third-party gen AI tools. Organizations should prioritize obtaining vendor security certifications like ISO/IEC 27001, regularly updating software, and conducting continuous monitoring and anomaly detection.
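Continuous monitoring and anomaly detection can start very simply: flag any account whose data-access volume deviates sharply from its own baseline. A minimal sketch using a z-score threshold (the threshold value and sample data are illustrative, not a recommendation):

```python
import statistics

def flag_anomaly(daily_access_counts: list, today: int,
                 threshold: float = 3.0) -> bool:
    # Flag a user whose file-access count today sits more than
    # `threshold` standard deviations above their historical mean --
    # a crude but common first signal of bulk data exfiltration.
    mean = statistics.mean(daily_access_counts)
    stdev = statistics.stdev(daily_access_counts)
    if stdev == 0:
        return today > mean
    return (today - mean) / stdev > threshold

history = [40, 55, 48, 52, 45, 50, 47]  # typical daily file accesses
print(flag_anomaly(history, 49))   # False: within normal range
print(flag_anomaly(history, 900))  # True: far outside baseline
```

Real deployments layer on per-resource sensitivity, time-of-day patterns, and peer-group baselines, but the principle is the same: the alert fires on deviation from established behavior, not on any single forbidden action.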
Delaney suggests that segmenting sensitive data and using strong encryption methods for data at rest and in transit can help mitigate risks associated with gen AI.
A Stanford study found that engineering teams using GitHub Copilot were more likely to write insecure code, which is more vulnerable to attacks from bad actors. This highlights the importance of human review in the engineering process. GitHub has responded by incorporating an AI-based vulnerability prevention system in Copilot that blocks insecure coding patterns in real time, along with Copilot Autofix, which finds and fixes code vulnerabilities.
Meredith Graham, chief people officer at Ensono and a member of its AI executive oversight committee, is not a technologist, but she ensures the human side of AI processes is considered, bringing to the committee her expertise in the unintended consequences of how data is used and flows.
Ensono uses Microsoft Copilot, but to protect its data the team has decided to limit access to it. The company is also developing a tool to better control data security, restricting employees' personal and pay information to approved human resources personnel only. The tool will roll out gradually across the organization later this year.
The main challenge is integrating the necessary information into the tool while ensuring proper security measures are in place, according to Graham. Initially, the focus will be on sales and marketing data, followed by more sensitive information such as HR and financial data.
Securing enterprise-wide gen AI technology requires a multifaceted approach. Gen AI has made cybersecurity mistakes more costly, and insiders, whether malicious or not, pose a bigger threat than ever before. Gen AI itself is not to blame, but as a catalyst for corporate change, copilot-like technologies amplify the risk, making it imperative to prioritize security measures now.