Companies are facing growing challenges with the increasing use of shadow AI in the workplace.

  • IT gatekeepers may miss critical, unsanctioned data flows when employees use new AI tools to send and receive information within an organization.
  • Information leaders are trying desperately to control the potential threats posed by shadow AI.
  • Experts suggest that outright bans on AI tools are not the solution, and a better approach involves implementing guardrails and providing education.

Information leaders are striving to curb the unregulated use of artificial intelligence beyond the scope of IT departments as its popularity and application continue to increase.

According to Jay Upchurch, CIO of data analytics platform SAS, shadow AI refers to AI usage that emerges in a company's "dark corners." These systems come to light either because they succeed or because they cause a security incident.

Shadow IT and shadow AI are not new; they are simply the latest iterations of a phenomenon rooted in the human desire for autonomy and authority. According to Tim Morris, chief security advisor at cybersecurity firm Tanium, who has extensive experience in offensive security and incident response, people will inevitably carve out their own fiefdoms as organizations grow.

But shadow AI is a more intricate and hazardous problem than shadow IT ever was.

Elevated risks, sensitive information leaks

Concerns about governance and security are prevalent in shadow AI, as questions arise about the potential for confidential IP to be exposed through publicly available large language models, copyright infringement, and the disclosure of personally identifiable information about customers.

The risks are higher for software developers at smaller companies, which must also contend with AI hallucinations and inaccuracies: most are using the free version of ChatGPT (GPT-3.5) or a similar tool, trained only on data through January 2022.

Experts and anecdotes suggest that while allowing time for creative tinkering can spur innovation within an organization, free rein isn't the solution. Companies including Samsung have suffered sensitive information leaks, and Microsoft has had temporary security lapses, as a result of generative AI deployments.

Morris stated, “Prohibition is ineffective as people do not follow it, and it leads to ostracizing good talent. To retain good talent, simply set boundaries.”

Morris has found that managing cybersecurity teams with offensive tendencies is similar to managing the cast of Ocean's 11.

Morris fosters creativity in a controlled setting by hosting an annual competition where contestants present their inventions.

Mike Scott, CISO of Immuta, stated that most unsanctioned AI usage by employees is not done with malicious intent.

Remote users and cloud-based concerns

Scott suggests that an endpoint security tool is the most practical and scalable solution to the problem of shadow AI. He highlights that the threat of shadow AI is highest with remote users and cloud-based AI platforms, and that technologies like cloud access security brokers can tackle both issues.

Karim advises using tools with privacy and security features, such as Microsoft Azure OpenAI, to control the data that is uploaded and kept private.

Upchurch advises monitoring the flow of data within your organization to detect and prevent unauthorized access or data breaches.

Upchurch notes that while most organizations operate under a controlled allowance, there are exceptions for highly sensitive operations, such as those in defense contracts. In such cases, an outright ban is typically more effective, as the sensitivity of the information requires strict measures to prevent leaks. "At that point, you can't really trust your employees because of the sensitivity of what you're dealing with," Upchurch emphasized.

While only a small portion of the industry will benefit from individual AI solutions, the majority will require a combination of policies, education, and a balanced approach to security strategies. Nevertheless, the potential of AI to revolutionize the industry is undeniable, as Morris stated, "I used to spend a week perfecting a script, which an AI can do in just three minutes."

Upchurch highlights that shadow AI is a legitimate concern, but so is ignoring AI itself. His advice: if you don't adopt it, your competitors will, and they'll eat your lunch.
