According to GeekWire, Seattle entrepreneur Joe Braidwood is launching Glacis through the AI2 Incubator to create tamper-proof “receipts” for AI decisions that take less than 50 milliseconds to generate. The startup emerged after Braidwood closed his previous AI therapy company Yara due to Illinois regulations making the technology “effectively uninsurable.” Glacis co-founder Dr. Jennifer Shannon brings nearly two decades of psychiatry experience and University of Washington credentials. The company is currently in private beta with digital health customers including nVoq and participates in Cloudflare’s Launchpad program. Braidwood sees a potential White House executive order blocking state AI laws as transforming his startup from new venture to “infrastructure necessity.”
The accountability problem nobody’s talking about
Here’s the thing about AI safety that most people miss: everyone’s focused on whether the guardrails work, but nobody can prove they actually fired when needed. It’s like having a security system but no logs showing whether it detected the break-in. Braidwood discovered this firsthand when his mental wellness AI startup Yara faced the harsh reality of dealing with suicidal patients. The liability became unmanageable because there was no way to demonstrate which safety measures had actually executed.
Glacis basically creates what Braidwood calls a “flight recorder for enterprise AI.” Every time an AI model makes a decision, it generates a signed record showing the input, safety checks that ran, and final output. The key innovation? These records can’t be altered afterward. So when something goes wrong—or more importantly, when nothing goes wrong—companies can prove exactly what happened.
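To make the idea concrete, here’s a rough sketch of what a tamper-evident decision receipt could look like. This is not Glacis’s actual format or signing scheme (neither is public); it’s a generic illustration using an HMAC signature and a hash chain, with the key, field names, and safety-check labels all invented for the example.

```python
# Illustrative sketch only: a minimal tamper-evident "decision receipt".
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-held-by-the-logging-service"  # made-up key for the sketch

def make_receipt(prev_hash: str, model_input: str,
                 safety_checks: dict, model_output: str) -> dict:
    """Build a signed record of one AI decision."""
    body = {
        "timestamp": time.time(),
        "prev_hash": prev_hash,  # chaining receipts makes silent deletion detectable
        "input_sha256": hashlib.sha256(model_input.encode()).hexdigest(),
        "safety_checks": safety_checks,  # e.g. {"self_harm_filter": "passed"}
        "output_sha256": hashlib.sha256(model_output.encode()).hexdigest(),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify_receipt(receipt: dict) -> bool:
    """Recompute the signature; any altered field makes verification fail."""
    body = {k: v for k, v in receipt.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, receipt["signature"])

receipt = make_receipt("genesis", "user message",
                       {"self_harm_filter": "passed"}, "model reply")
assert verify_receipt(receipt)

receipt["safety_checks"]["self_harm_filter"] = "skipped"  # tampering after the fact...
assert not verify_receipt(receipt)                        # ...is caught
```

The point is the property, not the implementation: once the record is signed, changing any field (or quietly skipping a safety check) breaks verification.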
Regulatory chaos creates unexpected opportunity
Now the political landscape might be creating the perfect storm for this technology. Reports suggest the White House is preparing an executive order directing federal agencies to challenge state-level AI regulations. If the Justice Department starts suing states over their AI laws, suddenly having neutral, verifiable proof of compliance becomes incredibly valuable.
Think about it: when federal and state regulations conflict, how do companies prove they’re playing by the rules? Glacis positions itself as that neutral trust layer that works across platforms and jurisdictions. It’s not about choosing sides in regulatory battles—it’s about having evidence that stands up in court or insurance claims.
The insurance angle changes everything
This might be the most practical application. Braidwood says insurers believe this technology could finally make it possible to insure AI systems. Right now, who would underwrite an AI therapy app or autonomous financial advisor? The risk is basically unquantifiable. But if you can prove with cryptographic certainty that safety protocols executed as intended, suddenly the risk becomes manageable.
The system works without exposing personal data—regulators and insurers can verify the receipts without seeing sensitive information. That’s crucial for healthcare and fintech applications where privacy regulations like HIPAA come into play.
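One common way to get that property is a commitment scheme: the receipt stores salted hashes of sensitive fields instead of the fields themselves, and the raw value plus its salt is disclosed only if a specific dispute requires it. The sketch below illustrates that general pattern, not Glacis’s mechanism; the function names and example values are invented.

```python
# Illustrative sketch only: committing to a sensitive field without exposing it.
import hashlib
import os

def commit(value: str) -> tuple[str, bytes]:
    """Store only a salted hash of a sensitive field in the receipt."""
    salt = os.urandom(16)
    digest = hashlib.sha256(salt + value.encode()).hexdigest()
    return digest, salt

def verify_disclosure(commitment: str, salt: bytes, revealed_value: str) -> bool:
    """Later, a regulator or insurer checks a disclosed value against the receipt."""
    recomputed = hashlib.sha256(salt + revealed_value.encode()).hexdigest()
    return recomputed == commitment

# The receipt carries only the commitment; the raw text never leaves the company.
commitment, salt = commit("patient message containing PHI")
assert verify_disclosure(commitment, salt, "patient message containing PHI")
assert not verify_disclosure(commitment, salt, "a different message")
```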
The startup reality check
But let’s be real—building trust infrastructure for AI is ambitious. Glacis is betting that regulatory complexity will drive adoption rather than stifle innovation. That’s a risky position when many companies might just avoid regulated AI use cases altogether. Still, the timing seems smart. As AI moves from experimental to mission-critical across industries, the need for accountability will only grow.
Braidwood’s experience shutting down Yara gives him credibility here. He’s not just theorizing about AI risks—he lived them. And his partnership with Dr. Shannon brings clinical legitimacy to their healthcare focus. The question is whether companies will pay for compliance proof before they’re forced to. My guess? They will once the first major AI liability lawsuits hit.
