According to PYMNTS.com, a FINRA review of its regulatory programs has found significant and uneven gaps in how financial firms oversee generative AI. The report shows firms deploying large language models widely across customer service, internal research, and compliance, and 90% of CFOs now report seeing very positive ROI. But FINRA observed that many deployments lack formal risk assessments, clear ownership, or proper documentation. Examiners found firms often couldn’t explain which models were in use or how outputs were generated, and many relied on vendor assurances without keeping their own compliance records. The use of AI for drafting client communications was also flagged as a risk area, with firms lacking clear review processes.
The Governance Gap
Here’s the thing: this isn’t a story about AI being inherently dangerous. It’s a story about old-school operational discipline failing to keep up with new tech. FINRA basically found that governance is either informal, fragmented, or just plain missing. Responsibility for AI oversight is often split among tech, compliance, and business teams, with no single point of accountability. And that’s a recipe for trouble.
Think about it. If no one clearly owns the risk, who’s on the hook when something goes wrong? The report notes that escalation processes for AI incidents are lacking, and even when firms have started drafting AI policies, they’re often untested and not really enforced. It’s the classic “move fast and break things” ethos crashing into the heavily regulated, accountability-obsessed world of finance. They just don’t mix.
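To make that concrete, here’s a minimal sketch of the kind of inventory record a firm could keep for every AI tool it touches. Everything in it is an assumption for illustration: the field names and the `needs_attention` check are one guess at what “clear ownership and proper documentation” might look like in practice, not anything FINRA prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a hypothetical internal AI inventory.

    Field names are illustrative assumptions, not a FINRA-prescribed schema.
    """
    tool_name: str                    # e.g. "vendor-chat-assistant"
    vendor: str                       # who supplies the underlying model
    model_version: str                # the version the firm actually reviewed
    business_owner: str               # a single accountable person, not "the team"
    compliance_owner: str             # who signs off on supervisory procedures
    approved_use_cases: list[str] = field(default_factory=list)
    last_risk_assessment: date | None = None   # None means never assessed
    escalation_contact: str = ""      # where AI incidents get reported

def needs_attention(record: AIToolRecord) -> bool:
    """Flag records with exactly the gaps the report describes:
    no named owner, no escalation path, or no risk assessment on file."""
    return (
        not record.business_owner
        or not record.escalation_contact
        or record.last_risk_assessment is None
    )
```

The point isn’t the code. It’s that a record like this forces a name into the owner fields and makes a missing risk assessment impossible to overlook.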
The Vendor Black Box Problem
This is where it gets really sticky. A lot of firms aren’t building their own models; they’re using AI baked into vendor platforms. And FINRA says firms often have no clue how those tools handle data, whether they train on prompts, or where they store outputs. They’re taking the vendor’s word for it. That’s a massive third-party risk.
So you’ve got this double whammy. The firm doesn’t fully understand the tool, and as PYMNTS Intelligence notes, attackers frequently compromise a vendor first to get to the target firm. Relying on a contract alone is not oversight. If you’re inputting customer data or non-public info into a vendor’s AI, and you can’t audit what happens to it, you’re walking a compliance tightrope. It seems like many firms are just hoping nothing goes wrong.
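One way to stop “just hoping” is to keep your own record of what goes out the door. Below is a rough, purely illustrative sketch of an audit hook that logs a hash of every prompt and screens for obviously non-public-looking data before anything reaches a vendor tool. The regex patterns and function names are assumptions made up for this example; a real control would be far broader and tied into the firm’s actual supervisory systems.

```python
import hashlib
import json
import logging
import re
from datetime import datetime, timezone

logger = logging.getLogger("ai_audit")

# Crude illustrative patterns for non-public info; a real control would go much further.
NPI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),   # SSN-shaped strings
    re.compile(r"\b\d{8,12}\b"),            # account-number-shaped strings
]

def audited_prompt(prompt: str, user_id: str, tool_name: str) -> str:
    """Record an internal audit entry before a prompt ever reaches a vendor tool.

    Sketch of the idea only: the firm keeps its own record of what was sent,
    when, by whom, and whether it tripped a non-public-info check, instead of
    relying solely on vendor assurances.
    """
    flagged = any(p.search(prompt) for p in NPI_PATTERNS)
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "user": user_id,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),  # hash, not raw text
        "possible_npi": flagged,
    }
    logger.info(json.dumps(entry))
    if flagged:
        raise ValueError("Prompt appears to contain non-public info; route to review.")
    return prompt  # only now does it go to the vendor call
```

Even something this crude gives the firm an artifact an examiner can actually look at: its own timestamped log of what was sent and whether it tripped a check, rather than a clause in a vendor contract.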
A Recipe For Future Headaches
The other big red flag is the lack of version control. Vendors update their AI models all the time. But FINRA found firms aren’t consistently reassessing risk or updating their own procedures when that happens. A material change in how the AI behaves could go completely unnoticed. How can you supervise a tool if you don’t know what version it is or what it’s capable of this week?
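If the vendor exposes any version identifier at all, even a trivial check like the sketch below would catch silent model swaps. The tool names and version strings here are invented for illustration; the point is simply comparing what the tool reports today against what the firm last formally assessed.

```python
import logging

logger = logging.getLogger("ai_change_mgmt")

# The version the firm last formally risk-assessed (hypothetical values).
LAST_ASSESSED = {"vendor-chat-assistant": "2024-06-model-v3"}

def detect_model_change(tool_name: str, reported_version: str) -> bool:
    """Return True (and log a warning) if a tool's runtime version differs
    from the version the firm last assessed.

    Assumes the vendor exposes *some* version identifier in its responses or
    metadata; if it doesn't, that absence is itself worth escalating.
    """
    assessed = LAST_ASSESSED.get(tool_name)
    if assessed is None:
        logger.warning("No risk assessment on file for %s", tool_name)
        return True
    if reported_version != assessed:
        logger.warning(
            "%s now reports version %s but was assessed at %s; re-review required",
            tool_name, reported_version, assessed,
        )
        return True
    return False
```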
This links directly to the cybersecurity concerns FINRA raised. AI is enabling more sophisticated phishing attacks, but the internal controls to detect AI-generated fraud are underdeveloped. It’s an arms race, and the defense is playing catch-up. When you combine weak governance, opaque vendors, and poor change management, you’re creating a landscape ripe for compliance failures, data leaks, and misleading client communications. The efficiency gains might be real, but the hidden costs could be enormous.
