According to Forbes, enterprises are racing to adopt AI but frequently skip the most critical first step: clearly defining what problem they’re actually trying to solve. Vijay Mehta, Chief Data & Technology Officer at Experian, emphasizes that real enterprise AI success depends on a “plumbing-first” approach that prioritizes data flows, workflows, and governance before diving into algorithms. His team has built governance and monitoring directly into their model lifecycle, with every model being versioned, traceable, and continuously monitored for performance drift. They recently launched the Experian Assistant for Model Risk Management, an AI-powered solution that helps organizations automate documentation and compliance audits while aligning with global regulatory standards like SR 11-7 and SS1/23. Mehta stresses that organizations often get stuck in “pilot purgatory” where AI projects show promise but never fully scale into production because the surrounding infrastructure isn’t ready.
The plumbing problem nobody wants to talk about
Here’s the thing: everyone wants to build the shiny AI model, but nobody wants to talk about the boring infrastructure that makes it actually work. And that’s exactly why so many projects fail. Mehta’s absolutely right that the real work happens behind the scenes – model drift detection, compliance automation, prompt injection protection. These aren’t sexy topics, but they’re what separate expensive experiments from production systems that actually deliver value.
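To make the drift-detection point concrete, here is a minimal sketch of one common drift metric, the Population Stability Index (PSI), which compares a production score distribution against its training-time baseline. This is an illustrative example, not Experian's implementation; the 0.2 alert threshold is a widely quoted rule of thumb, not a standard.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a production distribution ("actual") against the training
    baseline ("expected"). PSI above ~0.2 is a common rule-of-thumb
    signal that the model may be drifting and needs human review."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    # Clip so out-of-range production values land in the edge buckets
    # instead of silently falling out of the histogram.
    actual_clipped = np.clip(actual, edges[0], edges[-1])
    actual_pct = np.histogram(actual_clipped, bins=edges)[0] / len(actual)
    # Floor empty buckets to avoid log(0).
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct)
                        * np.log(actual_pct / expected_pct)))
```

The unglamorous part is not this function – it is wiring it into a scheduler, alerting on it, and deciding who gets paged when the number crosses the line.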
I’ve seen this pattern play out repeatedly. Companies get excited about AI, build a proof of concept that works beautifully in controlled environments, then completely collapse when they try to scale it. Why? Because they treated AI like magic instead of the engineering discipline it actually is. The organizations that succeed are the ones who understand that AI infrastructure needs to be as robust as their financial systems or customer databases.
The reality of pilot purgatory
That term “pilot purgatory” hits painfully close to home for anyone who’s worked in enterprise tech. Basically, it’s where AI initiatives go to die slowly. They show enough promise to keep getting funding, but never enough impact to justify full production deployment. Mehta identifies the core issue: teams optimize for model accuracy instead of real-world deployability.
Think about it – what good is a model that’s 2% more accurate if it requires constant manual intervention, can’t handle real-world data quality issues, and breaks every time you try to update it? Yet this is exactly what happens when data science teams work in isolation without involving operations, compliance, and IT from day one. The technical challenge isn’t the hard part – the organizational alignment is.
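One concrete example of the "real-world data quality" gap: production inputs need a validation gate in front of the model, something a controlled-environment pilot never exercises. The sketch below is a hypothetical fail-closed check – the field names and ranges are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class FieldRule:
    """An illustrative per-field validity rule for model inputs."""
    name: str
    min_value: float
    max_value: float
    required: bool = True

def validate_record(record: dict, rules: list[FieldRule]) -> list[str]:
    """Return a list of problems; an empty list means the record is
    safe to score. Fail closed: never send bad input to the model."""
    problems = []
    for rule in rules:
        value = record.get(rule.name)
        if value is None:
            if rule.required:
                problems.append(f"{rule.name}: missing")
            continue
        if not (rule.min_value <= value <= rule.max_value):
            problems.append(
                f"{rule.name}: {value} outside "
                f"[{rule.min_value}, {rule.max_value}]")
    return problems
```

A pilot built on a hand-curated dataset never hits the `missing` branch; production hits it on day one. That gap is exactly where the cross-team work with operations and IT pays off.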
Knowing when to walk away
Maybe the most valuable insight here is about knowing when to stop. How many companies have AI projects running that nobody uses, but that nobody has the courage to kill? Mehta’s perspective on responsible AI – including knowing when to retire models – is refreshingly honest.
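"Knowing when to retire" can be made operational rather than left to courage. As a sketch – with entirely illustrative thresholds, since real retirement policy comes from a governance team, not from code – a periodic job might flag models for review like this:

```python
def retirement_reasons(stats: dict,
                       min_monthly_calls: int = 100,
                       max_drift_psi: float = 0.25,
                       max_age_days: int = 730) -> list[str]:
    """Flag a model for retirement review. An empty list means the
    model stays in production; thresholds here are illustrative."""
    reasons = []
    if stats["monthly_calls"] < min_monthly_calls:
        reasons.append("low usage: nobody is calling it")
    if stats["drift_psi"] > max_drift_psi:
        reasons.append("input data has drifted past tolerance")
    if stats["age_days"] > max_age_days:
        reasons.append("past scheduled revalidation age")
    return reasons
```

The point of codifying it is that a flagged model lands on someone's review queue automatically, instead of quietly burning budget until a person is brave enough to kill it.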
We’re in this weird phase where companies feel pressure to “do AI” everywhere, even when it doesn’t make sense. But true maturity means being selective – deploying AI only where it genuinely creates value. The organizations that get this right aren’t necessarily the ones with the most AI projects, but the ones whose AI actually gets used and trusted by their teams.
The trust imperative
At the end of the day, this all comes down to trust. Can your business teams trust that the AI will work consistently? Can your compliance team trust that it’s following regulations? Can your customers trust that it’s making fair decisions? Without that foundation of trust, no amount of technical sophistication matters.
Mehta’s approach at Experian – building governance directly into the platform rather than bolting it on later – is exactly what more organizations need to emulate. When AI becomes invisible because it just works, that’s when you know you’ve succeeded – not when you have the fanciest models, but when your teams can rely on AI without even thinking about it.
