Time to shine a light on shadow AI

By Tim Freestone, Chief Strategy and Marketing Officer at Kiteworks.


Whilst companies are right to encourage their teams to find innovative uses of generative artificial intelligence (GenAI) to streamline workflows, many employees are using the technology in ways their employers have not sanctioned. This so-called “shadow AI” is a problem that is not going away any time soon. A recent study from Deloitte found that only 23% of those who have used GenAI at work believe their manager would approve of how they’ve used it. This unsanctioned use of AI should not be encouraged: it could expose an organisation to serious legal, financial, or reputational risk.

Something needs to change. Nearly one-third of employees admit to placing sensitive data into public GenAI tools, even though, in the same study, 39% of respondents named the potential leak of sensitive data as a top risk of their organisation’s use of public GenAI tools.

A tool for all

But how did we get here? The step change in AI adoption came with the launch of ChatGPT. From that point on, it was no longer just a tool for technologists but a tool for all. It was a collective ‘aha’ moment. Now, the use of AI has become almost as ubiquitous in our everyday lives as brushing our teeth in the morning.

In the past 12 months, we have seen organisations across nearly every industry derive business value from AI. In fact, in a recent McKinsey Global Survey, 65% of respondents reported that their organisations are now regularly using the technology, nearly double the percentage from ten months earlier. Respondents’ expectations for GenAI’s impact were highly positive, with three-quarters predicting that it would lead to significant or disruptive change in their industries in the years ahead.

Applying zero trust principles to the data layer

Most enterprise GenAI solutions are being designed to leverage data that is already available. However, with much of this data sensitive in nature, organisations can take no chances. It is time for a shift in thinking: businesses should see a GenAI solution as a machine that moves data. The top priority, therefore, should be how that data is controlled, both going into the system and when it comes out the other side.
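To make that idea concrete, here is a minimal sketch of what ingress control can look like: scrubbing obviously sensitive values from a prompt before it leaves the business. The patterns and the redact_prompt helper are illustrative assumptions, not a description of any particular product or a complete data loss prevention solution.

```python
# A minimal sketch of ingress control for a GenAI pipeline: scrub obviously
# sensitive values from a prompt before it is sent to an external service.
# The patterns below are illustrative assumptions, not a complete DLP ruleset.
import re

# Illustrative patterns for two common kinds of sensitive data.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact_prompt(prompt: str) -> str:
    """Replace sensitive values with placeholders before the prompt
    leaves the organisation's boundary."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact_prompt("Summarise the complaint from jane.doe@example.com"))
# -> Summarise the complaint from [EMAIL REDACTED]
```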

Businesses need to apply zero trust principles to this data layer. A zero trust model operates on the principle of rigorous verification: never assume trust, but confirm every access attempt and transaction. This shift away from implicit trust is crucial. Embedding zero trust principles throughout generative architectures offers a proactive path to accountability, safety, and control.
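As a rough illustration of what “never trust, always verify” means at the data layer, the sketch below checks every single read against an explicit, deny-by-default policy, carrying no trust over from earlier requests. The AccessRequest structure, roles, and classifications are hypothetical, chosen only to show the shape of the check.

```python
# A minimal sketch of per-request verification at the data layer:
# every access is evaluated against an explicit policy, deny by default.
# The roles and classifications here are assumptions for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    user: str
    role: str
    document: str
    classification: str  # e.g. "public", "confidential"

# Explicit allow-list: which roles may read which classifications.
POLICY = {
    "analyst": {"public"},
    "legal": {"public", "confidential"},
}

def verify(request: AccessRequest) -> bool:
    """Evaluate every access attempt on its own; no implicit trust."""
    allowed = POLICY.get(request.role, set())
    return request.classification in allowed

req = AccessRequest("sam", "analyst", "q3-forecast.docx", "confidential")
print("granted" if verify(req) else "denied")  # -> denied
```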

The democratisation of data 

Part of the reason for the shadow AI epidemic is that the technology has so far outpaced efforts to secure it. Whilst some organisations know the risks, that knowledge has not yet percolated through. AI has democratised data leverage. Before GenAI, a business had to have technology sitting in front of a database to get at the data held within, plus someone who knew how to use it. Now, the only barrier to leveraging data is whether you know the alphabet. Because of this, the likelihood of data leaving the business markedly increases.

Whilst a business can take steps to secure the technology and the data, there are always human beings in the loop. Training and education help, but we as a species remain incredibly flawed. 

Bringing AI out of the shadows

As long as GenAI is a tool that staff can use to help them reach their goals, they will take advantage of it. The use of AI is not going to go away, and nor should it. AI is great for automating tasks, handling big data, facilitating decision-making, reducing human error, and furthering our understanding of the world around us. However, education on best practices and responsible AI use is needed.

Least-privilege access, always-on monitoring, and “never trust, always verify” have been in place at the technology layer for some time. It is now time to bring these principles down to the data itself. Thankfully, help is at hand. With a Private Content Network, organisations can protect their sensitive content more effectively in this era of AI. The best solutions provide content-defined zero trust controls, featuring least-privilege access defined at the content layer and next-gen DRM capabilities that block downloads from AI ingestion. They also employ AI themselves to detect anomalous activity, for example, sudden spikes in access, edits, sends, and shares of sensitive content. This will help shine a light on any unsanctioned activity going on in the shadows so that a business can remain compliant.
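To show the kind of anomaly detection described above in its simplest form, the sketch below flags a user whose daily access count spikes far above their own recent baseline. The three-standard-deviation threshold is an assumption for illustration, not a vendor specification; real products weigh many more signals.

```python
# A minimal sketch of spike detection on content access: flag a user whose
# access count today sits far above their own recent baseline. The sigma
# threshold is an illustrative assumption, not a product setting.
from statistics import mean, stdev

def is_spike(daily_counts: list[int], today: int, sigma: float = 3.0) -> bool:
    """Flag today's access count if it exceeds the user's baseline
    by more than `sigma` standard deviations."""
    if len(daily_counts) < 2:
        return False  # not enough history to judge
    baseline, spread = mean(daily_counts), stdev(daily_counts)
    return today > baseline + sigma * max(spread, 1.0)

history = [12, 9, 14, 11, 10, 13, 12]  # typical daily accesses
print(is_spike(history, 15))  # False: within normal variation
print(is_spike(history, 90))  # True: worth investigating
```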
