According to Fortune, New York Assemblymember Alex Bores, a Democrat running for Congress in Manhattan’s 12th District, argues that AI deepfakes are a “solvable problem” using a decades-old cryptographic technique. He points to the free, open-source C2PA (Coalition for Content Provenance and Authenticity) standard, which can attach tamper-evident credentials to images, video, and audio to show whether content is real or AI-generated. Bores, a former Palantir data scientist, just had his RAISE Act signed into law last Friday, imposing safety and reporting requirements on “frontier” AI labs like Meta, Google, OpenAI, Anthropic, and xAI. This has triggered a backlash from a pro-AI super PAC, reportedly backed by tech investors, which has pledged millions to defeat him in the 2026 primary. He argues that compliance for big tech would be minimal, akin to hiring “one extra full-time employee,” and that the goal is to make cryptographic proof the default for legitimate media.
The HTTPS for Content
Here’s the thing about Bores’ argument: it’s compellingly simple. We solved the “how do we trust this website?” problem for banking with HTTPS and digital certificates. His push is to do the same for media with C2PA. It’s not a magic bullet, since it requires creators and platforms to adopt it by default, but the vision is clear. If most real photos from news agencies or official channels carry a verifiable cryptographic seal, then anything without one becomes inherently suspect, just as you’d balk if your bank’s site stopped using HTTPS. The technical foundation is there. The real battle is making it ubiquitous, which is a political and economic fight, not a scientific one.
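To make the “cryptographic seal” idea concrete, here is a deliberately simplified sketch of tamper-evident provenance in Python. Everything in it is invented for illustration: real C2PA manifests are embedded in the media file and signed with X.509 certificates and COSE signatures, not the shared-key HMAC stand-in used below.

```python
import hashlib
import hmac
import json

# Stand-in for a signer's credential. Real C2PA uses public-key
# certificates issued to cameras, editing tools, and newsrooms,
# so anyone can verify without holding a secret key.
SIGNING_KEY = b"newsroom-demo-key"

def seal(content: bytes, claims: dict) -> dict:
    """Attach a tamper-evident manifest to a piece of content."""
    manifest = {
        "claims": claims,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload,
                                     hashlib.sha256).hexdigest()
    return manifest

def verify(content: bytes, manifest: dict) -> bool:
    """Return True only if both the content and its claims are untouched."""
    unsigned = {k: v for k, v in manifest.items() if k != "signature"}
    if unsigned["content_sha256"] != hashlib.sha256(content).hexdigest():
        return False  # the pixels were altered after sealing
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

photo = b"\x89PNG...raw image bytes..."
m = seal(photo, {"source": "Example Wire Service", "tool": "camera-fw-1.2"})
print(verify(photo, m))         # True: the seal checks out
print(verify(photo + b"x", m))  # False: any edit breaks the seal
```

The point of the sketch is the asymmetry Bores is counting on: a valid seal is cheap to check, and its absence (or breakage) is the signal. Editing a single byte of the content, or rewording a single claim in the manifest, makes `verify` return `False`.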
Laws vs. Labels
But Bores isn’t naive. He stresses that technical labels are only one piece. You still need laws that explicitly ban the most harmful uses, like the New York law against non-consensual deepfake images. And that’s where the friction is. He warns that state-level laws risk being kneecapped by a new federal push to preempt state AI rules. So it’s a two-front war: build the technical infrastructure for trust *and* defend the legal frameworks that punish bad actors. Relying on just one seems doomed to fail.
The Industry Backlash
Now, the backlash to his RAISE Act is telling. A super PAC pledging millions to unseat him? That’s a serious reaction. It shows that even modest, targeted regulation, aimed at a handful of giant “frontier” labs, is seen as a threat. Bores frames it as merely systematizing voluntary commitments these companies already made. But the industry’s fierce response suggests any mandated check, any required disclosure of a “critical safety incident,” is a line it doesn’t want crossed. It makes you wonder: if compliance is so cheap, why the multi-million-dollar political counterattack?
A Solvable Problem?
So, is it really solvable? Technically, yes. The tools exist. Politically and socially? That’s the murky part. Getting global platforms to default to C2PA, educating billions of users to look for a seal they don’t understand, and navigating a brutal political fight over regulation is a herculean task. Bores is betting his congressional campaign on this idea. Whether he’s right or just optimistic might be one of the defining tech-policy stories of the next few years. You can hear his full case on the Bloomberg Odd Lots podcast.
