Somewhere in a corner office, an executive read a McKinsey report about AI productivity gains. A week later, the compliance department got a new mandate: adopt AI tools, boost efficiency, stay competitive.
Nobody asked if they were ready. Nobody checked if the data infrastructure could handle it. Nobody verified that the systems would talk to each other.
And now compliance professionals are drowning.
A new survey from Compliance Week and konaAI tells the story in brutal detail. Nearly 200 compliance, ethics, risk, and audit leaders weighed in on AI implementation. The findings aren’t pretty.
The Numbers That Should Worry Every Executive
Let’s start with the headline: 66% of compliance professionals say data quality or access issues are their biggest AI challenge.
Sixty-six percent.
That’s not a minor implementation hiccup. That’s a flashing red warning light that most organizations are trying to run AI on foundations made of sand.
Here’s what executives often miss: AI doesn’t create intelligence out of nothing. It processes whatever data you feed it. Incomplete records, outdated information, siloed systems that don’t communicate—all of that becomes AI output that compliance teams are supposed to trust.
Speaking of trust, only 42% of respondents trust what their AI tools produce. Nearly half remain neutral, which in compliance-speak means “I’m verifying everything manually anyway.”
So much for those productivity gains.
The Integration Disaster
Almost half of survey respondents (49%) flagged poor integration with current systems as a significant problem. This one hits differently once you understand how compliance technology works.
Most compliance departments operate on a patchwork of specialized tools accumulated over years. Case management here, regulatory tracking there, document repositories somewhere else, communication archives in another corner. Each system solved a specific problem. None were built to work together.
Now organizations want to layer AI across this Frankenstein’s monster of technology. The result? Workflows that should be streamlined become more complicated. Information that should flow automatically requires manual intervention. Employees spend more time fighting the technology than using it.
The AI tools aren’t broken. They’re just trying to operate in environments that were never designed to support them.
Shadow AI: The Risk Nobody Wants to Discuss
Here’s where things get genuinely concerning: 42% of respondents worry about unknown or unmanaged employee AI use.
When official AI tools are clunky, poorly integrated, or require twelve approvals to access, employees find workarounds. They paste sensitive data into ChatGPT. They upload confidential documents to get quick summaries. They use personal accounts because it’s faster than dealing with IT.
Every shortcut creates risk. Confidential information ends up on external servers. Sensitive data potentially enters AI training sets. Compliance controls get bypassed by people who don’t even realize they’re doing something problematic.
The cruel irony: Organizations implement AI to improve compliance, but the implementation failures drive employees toward AI uses that create compliance violations.
The Top-Down Problem
The survey reveals who’s pushing AI adoption: 48% of the push comes from executive leadership, another 15% from boards. Compliance teams themselves? Not the driving force.
This explains a lot.
Executives see AI’s potential. They make strategic decisions based on competitive pressure and efficiency promises. But they’re insulated from ground-level reality—the data quality issues, the integration nightmares, the training gaps, the policy vacuums.
Mohan Krishna from konaAI put it plainly: “This indeed indicates top-down pressure to modernize compliance or risk getting left behind in the company.”
Translation: Adopt AI or look like a dinosaur. Never mind whether you have the infrastructure, expertise, or governance frameworks to do it responsibly.
The Regulatory Black Hole
Some 27% of respondents cited regulatory uncertainty as a challenge. Another 29% pointed to inconsistent or nonexistent AI policies within their own organizations.
For compliance professionals, this is existential. Their entire job depends on clear rules and defined boundaries. Without them, they’re guessing—and guessing wrong carries professional consequences.
U.S. federal regulators have been remarkably quiet on comprehensive AI governance. A few states, like California and Colorado, have moved forward. Many others are considering legislation. The result is a patchwork that varies by jurisdiction, industry, and use case.
Compliance teams are expected to implement AI while simultaneously predicting what future regulations might require. It’s like being asked to follow a rulebook that hasn’t been written yet.
What Actually Needs to Happen
The survey points toward solutions, even if most organizations aren’t pursuing them.
Data governance must come first. Before deploying any AI tool, organizations need honest assessments of data quality, accessibility, and integration capabilities. This isn’t glamorous work, but it’s prerequisite work.
Training requires real investment. Tutorial videos and lunch-and-learns won’t cut it. Compliance professionals need comprehensive programs that build both technical skills and critical thinking about AI limitations.
Policies can’t wait for regulatory clarity. Internal AI governance frameworks should exist now, addressing authorized uses, data handling, verification standards, and employee expectations.
Most importantly, compliance teams need a seat at the strategy table. Top-down mandates work better with bottom-up input. The people implementing AI daily understand what’s working, what’s failing, and what might help.
Here’s What Nobody in the C-Suite Wants to Hear
The survey’s implicit message deserves explicit statement: Compliance teams aren’t anti-AI. They’re anti-chaos.
They want tools that work, data they can trust, training that prepares them, and policies that guide them. They want to be ready before they’re required to be.
Right now, most aren’t. And the executives pushing AI adoption need to understand that readiness isn’t optional—it’s the difference between transformation and expensive disaster.
The compliance team didn’t ask for AI. But if leadership wants AI to actually work, they might want to start asking the compliance team what they need.