AI pilot limbo: The hidden AI governance crisis undermining enterprise adoption

The news: Generative AI (genAI) has become standard across US enterprises—95% of companies report using it to some extent, up from 83% a year ago, per Bain & Co—but wider enterprise adoption is hitting roadblocks. A lack of robust governance and the need for continuous security validation are getting in the way.

Most organizations are stuck in AI pilot limbo, unable to push promising use cases into full production scale, Chatterbox Labs CEO Danny Coleman and CTO Stuart Battersby told The Register.

“Enterprise adoption is only like 10% today. McKinsey is saying it’s a $4 trillion market. How are you actually ever going to move that along if you keep releasing things that people don't know are safe to use?” Coleman said.

Challenges to adoption: Executives and CIOs won’t greenlight AI deployment until models are provably safe. 

Chatterbox Labs says relying on content filters and vendor assurances isn’t enough—AI’s non-deterministic behavior demands tailored, iterative testing for specific enterprise cases.
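For illustration only, here is a minimal Python sketch of what that kind of repeated-run, use-case-specific validation can look like. Chatterbox Labs' tooling is proprietary and not described in detail here, so the model call, policy rules, and pass-rate threshold below are placeholder assumptions, not their actual methodology.

```python
import re
from typing import Callable

# Hypothetical model call -- wire this to your provider's SDK.
def query_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real model endpoint")

# Example policy checks; real enterprise checks would be tailored to the use case.
POLICY_CHECKS: dict[str, Callable[[str], bool]] = {
    "no_ssn_pattern": lambda text: not re.search(r"\b\d{3}-\d{2}-\d{4}\b", text),
    "no_internal_hostnames": lambda text: "corp.internal" not in text,
}

def validate_use_case(prompt: str, runs: int = 20, min_pass_rate: float = 0.95) -> bool:
    """Repeat the same prompt and require a minimum pass rate, because a single
    clean response proves little for a non-deterministic model."""
    passes = 0
    for _ in range(runs):
        output = query_model(prompt)
        if all(check(output) for check in POLICY_CHECKS.values()):
            passes += 1
    rate = passes / runs
    print(f"pass rate: {rate:.0%} over {runs} runs")
    return rate >= min_pass_rate
```

The point of the repetition is the one the article makes: a model that behaves safely once is not the same as a model that behaves safely reliably, so validation has to be ongoing rather than a one-time gate.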

Some roadblocks to deployment: 

  • 49% of US ad industry professionals cite the need for a clear list of approved AI use cases, per IAB.
  • While AI pilots show promise, few pass rigorous risk and governance markers to make it into production. 
  • AI projects flounder not for lack of model capability, but because traditional corporate security standards don’t address AI’s unique behaviors.
  • The rapid release cycle of new AI models compounds the problem, making it difficult for enterprises to keep testing and validation current before the next version arrives.

Potential solutions: Companies should define approved AI use cases, require ongoing validation, and regularly audit tools and outcomes for bias, keeping AI aligned with business goals and building trust.

  • 49% develop strategic roadmaps for AI use over time, per IAB.
  • 40% create defined key performance indicators (KPIs) specifically for AI solutions.
  • Only 21% are embedding testing, validation, and risk mitigation into their adoption plans, and only 13% run regular fairness/bias audits and debiasing.
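As a rough sketch of what a basic fairness/bias audit can involve, the snippet below compares favorable-outcome rates across groups and flags large gaps using the four-fifths rule of thumb. The record fields, threshold, and metric are illustrative assumptions; real audits use richer metrics and real decision logs.

```python
from collections import defaultdict

def audit_outcome_rates(records: list[dict]) -> dict[str, float]:
    """Favorable-outcome rate per group (e.g., rate of positive model decisions)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += int(r["favorable"])
    return {g: positives[g] / totals[g] for g in totals}

def flag_disparate_impact(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `threshold` times the best group's rate
    (the common four-fifths heuristic -- one of many possible fairness metrics)."""
    best = max(rates.values())
    return [g for g, rate in rates.items() if rate < threshold * best]

# Example with made-up audit records:
records = [
    {"group": "A", "favorable": True},
    {"group": "A", "favorable": True},
    {"group": "B", "favorable": True},
    {"group": "B", "favorable": False},
]
rates = audit_outcome_rates(records)
print(rates, flag_disparate_impact(rates))
```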

Our take: To escape limbo, enterprises must shift from experimentation to disciplined execution. That means building AI governance into the foundation—not as an afterthought. Security, transparency, and trust must be embedded into every AI deployment.

Businesses shouldn’t treat AI as a plug-and-play solution; tools need to be vetted and aligned with desired outcomes before deployment. For marketers, campaigns built on shaky AI foundations risk brand reputation, compliance failures, and consumer mistrust.
