According to Fortune, AI security startup Cyera has surpassed $100 million in annual recurring revenue and achieved a valuation exceeding $6 billion in less than two years, serving major clients including AT&T, PwC, and Amgen. CEO Yotam Segev described the current state of AI security as “grim,” with CISOs caught between blocking AI innovation entirely or risking massive data exposure from employees using unauthorized tools like ChatGPT and Copilot. The company recently launched a research lab to study how AI systems interact with sensitive data inside large organizations. Meanwhile, Apple is reportedly finalizing a deal to pay Google about $1 billion annually for a 1.2-trillion-parameter AI model to overhaul Siri. Mark Zuckerberg and Priscilla Chan have also restructured their philanthropy to focus on AI and science initiatives.
The AI Security Crisis Nobody’s Talking About
Here’s the thing that really stood out from this Fortune piece: we’re in this weird transitional period where everyone knows AI is coming, but nobody’s really prepared for the security implications. Segev compared his company to “Levi’s in the gold rush” – basically, when everyone’s rushing toward something shiny, someone needs to provide the fundamental infrastructure to keep things from falling apart.
The core problem is brutally simple: employees are feeding sensitive company data into public AI tools, often without approval or against policy, and CISOs are stuck making impossible choices. Do they block AI entirely and become the innovation police, or allow it and risk exposing customer data, trade secrets, and regulated information? It’s a no-win situation that’s creating massive tension inside organizations.
The Privilege of Being Regulated
This is where it gets really interesting. Segev noted that regulated companies in healthcare, finance, and telecom actually have an advantage. They can push back and say “we’re not ready” because they have compliance requirements as cover. But everyone else? They’re getting trampled by the AI wave.
Think about that for a second. The companies with the most to lose from data exposure are actually in the best position to slow things down. Meanwhile, less regulated businesses are diving headfirst into AI without proper safeguards. It’s creating this bizarre situation where the companies that should be most cautious are actually taking the biggest risks.
The Coming AI Agent Crisis
Wittenberg, also quoted in the piece, dropped what might be the most concerning insight: right now, we’re dealing with “knowledge systems” that you can still contain. But once AI agents start taking autonomous actions and talking to each other? That’s when the real trouble begins.
He predicts we’re only a couple of years away from widespread enterprise deployment of these autonomous agents. And honestly, assuming security teams get even that much runway feels optimistic. The industry is already struggling to keep up with current AI risks – how are we supposed to secure systems that can make independent decisions and coordinate with other AIs?
Wittenberg’s comment about hoping “the world will move at a pace that we can build security for it in time” is telling. It’s basically an admission that we’re racing against a clock nobody set. The pressure on security teams is immense, and as Sharon Goldman has noted, companies need all the help they can get.
What Comes Next?
Looking at the broader landscape, it’s clear we’re at an inflection point. With Apple reportedly paying Google about $1 billion a year for AI models and Zuckerberg shifting his philanthropy toward AI research, the momentum is undeniable. The question isn’t whether AI is coming – it’s whether security can catch up.
The situation reminds me of the early cloud computing days, where security was an afterthought until major breaches forced everyone to take it seriously. But AI moves faster and has more potential for catastrophic failure. We can’t afford to wait for disaster to strike before getting serious about AI security.
For companies dealing with these challenges, resources like the CISO Pressure Index might provide some context about what their peers are facing. But ultimately, this is a problem that requires fundamentally new approaches to security, not just incremental improvements to existing systems.
