Compliance professionals exposed to AI breaches

A recent survey by compliance eLearning and software provider VinciWorks has found that only 29% of compliance professionals have implemented specific procedures, training, or preventive measures to guard against Artificial Intelligence (AI) related compliance breaches. The majority (71%) admitted to lacking such protective measures, and 13% have no plans to address this significant gap in their compliance strategy in the near future.

The survey gathered 269 responses from industry leaders across the UK, USA, and Europe, exploring the perception of risks, industry sentiment, and the level of preparedness to address potential compliance issues associated with AI in the workplace.

As AI-powered tools continue to gain prominence across industries—embedded in functions ranging from client due diligence and supply chain management to HR and recruitment—concerns are mounting about potential risks. These include serious compliance failures such as discrimination, plagiarism, intellectual property theft, and GDPR violations. Adding to the urgency, the European Union's impending landmark Artificial Intelligence Act, which carries penalties of up to 7% of global turnover for AI misuse, has raised the stakes for organisations.

The survey found that only 3% of respondents have completed AI training at work as part of their yearly compliance training. An alarming 82% admitted to either not having completed AI training or being uncertain about their current status, and of these, 19% said they have no intention of participating in any AI training at work. This underscores a significant shortfall in AI risk awareness and prevention within organisations.

“In light of these findings, there is an immediate and critical need for comprehensive AI training and risk mitigation procedures within organisations,” says Nick Henderson-Mayo, Director of Learning and Content at VinciWorks. “With AI regulation on the horizon, there’s an immediate need for businesses to invest in comprehensive AI compliance programmes. Using AI in business can be very helpful in some areas. Still, if employees end up using chatbots to write their reports or feed customer data into an AI without permission, that can cause a serious compliance problem.”

Despite the risks, half of the respondents (51%) expressed optimism about AI's impact on their industries, with 6% feeling very optimistic. Conversely, 12% acknowledged feeling pessimistic, while 37% adopted a neutral stance, reflecting the varied perspectives within a cross-section of industries.

The survey also explored individual usage of AI in day-to-day work, revealing that 45% of respondents currently use AI technologies somewhere in their business; of these, 12% reported using AI daily. Equally noteworthy is the further 45% who, while not currently using AI, expressed interest in exploring its potential applications in their roles.
