According to TechRepublic, research by Wiz reveals that 65% of the world’s most valuable AI firms have accidentally exposed their most sensitive digital secrets on GitHub. These industry titans, with combined valuations exceeding $400 billion, left API keys, authentication tokens, and credentials sitting in plain sight. The exposed material included LangSmith API keys providing organization-level access and enterprise-tier ElevenLabs credentials found in plaintext files. One anonymous AI company’s leaked Hugging Face token provided access to approximately 1,000 private AI models. The research found these secrets buried in deleted repositories and developer forks where most security scanners never look, creating what amounts to a long-standing blind spot at the core of the AI boom.
Hidden goldmine for attackers
This isn’t your typical data leak scenario. We’re talking about credentials that could expose organizational structures, proprietary training datasets, and private AI models that these companies have invested millions to develop. One leaked token can grant access to thousands of private models, enabling competitive sabotage, IP theft, and supply chain attacks that ripple through every business built on AI infrastructure.
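To make that blast radius concrete, here is a minimal sketch of the reconnaissance a leaked Hugging Face token enables, using the official huggingface_hub client. The organization name and token below are hypothetical placeholders; the calls themselves are standard parts of the public API.

```python
# Sketch: what a leaked Hugging Face token exposes, via the official
# huggingface_hub client. "acme-ai" is a hypothetical organization.
from huggingface_hub import HfApi

LEAKED_TOKEN = "hf_redacted_placeholder"  # real HF tokens start with "hf_"

api = HfApi(token=LEAKED_TOKEN)

# whoami() reveals the token owner and their organizations:
# instant reconnaissance on organizational structure.
identity = api.whoami()
print(identity["name"], [org["name"] for org in identity.get("orgs", [])])

# With organization-level read access, every private model is enumerable...
for model in api.list_models(author="acme-ai"):
    print(model.id, model.private)

# ...and each one is downloadable: weights, configs, training artifacts.
# api.snapshot_download(repo_id="acme-ai/proprietary-model")
```

A few lines of attacker effort, and an organization’s entire private model catalog is laid out for the taking.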
Here’s the thing that should worry everyone: size and GitHub visibility aren’t reliable indicators of security maturity. Researchers found that one company with zero public repositories and just 14 team members still managed to leak sensitive credentials. Meanwhile, the largest company without exposed secrets maintained 60 public repositories and 28 organization members. So you can’t just look at a company’s public footprint and assume they’ve got their security act together.
Why security keeps losing
At the root is the classic tension in tech, speed versus security, and speed keeps winning. AI teams live on rapid prototyping and “share first, fix later” habits. In the race to ship, they often store secrets in public repositories, and many skip even basic scanning of deleted forks or development notebooks.
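Notebooks deserve special mention because a saved .ipynb file captures cell outputs too, so a token printed once while debugging gets committed alongside the code. A minimal sweep for token-shaped strings might look like the sketch below; the prefix patterns (hf_, sk-, lsv2_) are illustrative, not an exhaustive ruleset.

```python
# Sketch: sweep a working tree's Jupyter notebooks for token-shaped strings.
# Scanning the raw notebook JSON covers source cells, metadata, and the
# saved outputs where printed secrets tend to hide.
import re
from pathlib import Path

# Illustrative prefixes only; real scanners ship far larger rulesets.
PATTERNS = {
    "Hugging Face token": re.compile(r"hf_[A-Za-z0-9]{30,}"),
    "OpenAI-style key": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "LangSmith-style key": re.compile(r"lsv2_[A-Za-z0-9_]{20,}"),
}

def scan_notebook(path: Path) -> list[str]:
    raw = path.read_text(encoding="utf-8", errors="ignore")
    findings = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(raw):
            # Report a redacted prefix only; never echo the full secret.
            findings.append(f"{path}: {label}: {match.group()[:10]}[redacted]")
    return findings

if __name__ == "__main__":
    for nb in Path(".").rglob("*.ipynb"):
        for finding in scan_notebook(nb):
            print(finding)
```

Deleted forks are harder: they require enumerating forks through the GitHub API and scanning each clone, which is exactly the work most teams never do.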
And collaboration makes everything worse. These projects operate in loosely governed, experimentation-driven environments with frequently shared notebooks, models, and repositories. That’s exactly where security protocols buckle under the pressure of rapid iteration. Basically, the very culture that drives AI innovation is systematically undermining basic security practices.
The communication breakdown is particularly alarming. Wiz discovered that nearly half of disclosure attempts either failed to reach their targets or received no response. Many organizations lack clear incident response channels, meaning exposed secrets stay active and exploitable for far too long. How many of these companies even know they’re leaking right now?
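There is a low-cost, standardized fix for the disclosure half of this problem: RFC 9116 defines a security.txt file, served at /.well-known/security.txt, that tells researchers exactly where to send a report. A minimal example, with hypothetical addresses:

```
# Served at https://example.com/.well-known/security.txt (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the only required fields; publishing even this much means a researcher holding your leaked keys knows where to reach you.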
The scary bigger picture
This isn’t an isolated problem. GitHub reported over 39 million leaked secrets in 2024 alone, a 67% increase from the previous year. Even more concerning: 70% of secrets leaked in 2022 remain active today. Old keys don’t die; they linger like slow-burning fuses that attackers can light years later.
The fallout from AI leaks hits differently than traditional breaches. A single compromise can disrupt multiple organizational levels simultaneously: technology, business, legal, ethical, and strategic competitiveness. Compromise the training process, and you can undermine trust in deployed systems across entire product lines. These are attack paths that traditional software never had to worry about.
Time for a mindset shift
The findings from Wiz’s research should serve as a massive wake-up call for an industry that’s prized shipping speed over security basics. As AI adoption accelerates, developers and security teams need to tighten oversight of development pipelines and secret storage practices.
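As a sketch of what tightening the pipeline can mean in practice, the pre-commit hook below refuses any commit whose staged changes contain credential-shaped strings, reusing the illustrative patterns from the notebook sweep above. Dedicated tools such as gitleaks or GitHub’s push protection do this far more thoroughly; this is just the minimal idea.

```python
#!/usr/bin/env python3
# Sketch of a pre-commit gate: block commits whose staged changes contain
# AI-credential-shaped strings. Prefix patterns are illustrative only.
import re
import subprocess
import sys

PATTERNS = [
    ("Hugging Face token", re.compile(r"hf_[A-Za-z0-9]{30,}")),
    ("OpenAI-style key", re.compile(r"sk-[A-Za-z0-9_-]{20,}")),
    ("LangSmith-style key", re.compile(r"lsv2_[A-Za-z0-9_]{20,}")),
]

def staged_diff() -> str:
    # "git diff --cached" shows exactly what is about to be committed.
    return subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout

def main() -> int:
    hits = []
    for line in staged_diff().splitlines():
        # Only inspect added lines; skip the "+++" file headers.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        for label, pattern in PATTERNS:
            if pattern.search(line):
                hits.append(f"{label}: {line.strip()[:60]}")
    if hits:
        print("Commit blocked, possible credentials staged:")
        for hit in hits:
            print("  " + hit)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Saved as .git/hooks/pre-commit and marked executable, it runs before every commit, catching the leak at the cheapest possible moment: before it ever reaches GitHub.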
Companies should implement mandatory secret scanning for public repositories, establish proper disclosure channels, and consider specialized detection for AI-related credentials. But this moment demands more than incremental tweaks; it calls for a fundamental mindset shift in how AI teams build, share, and secure code during the frantic sprint of collaborative prototyping. The alternative is watching billions in IP walk out the digital door while everyone’s too busy building to notice.
