Fortune just quoted ex-Palantir New York Assemblymember Alex Bores on deepfakes. He says they are “a solvable problem” using the Coalition for Content Provenance and Authenticity (C2PA) standard, which cryptographically signs media files with tamper-evident credentials.
Bores’ cup is half full. C2PA is not a real solution, though. The half he’s missing is the half that matters most in deepfakes.
Abusing an HTTPS Analogy
Bores lazily compares C2PA adoption to the shift from HTTP to HTTPS that began in the 1990s:
It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect.
The analogy is instructive, though not in the way he intends. HTTPS works in very specific ways:
- A centralized trust hierarchy (certificate authorities) decides whose certificates are trusted, with CA roots preloaded into browsers and operating systems
- The check is binary: the certificate is valid or it is not (sketched in code after this list)
- Browsers now enforce it automatically, so users no longer have to choose
- The failure mode has been made abundantly clear: no glowing padlock, no transaction
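To see just how binary that check is, here’s a minimal sketch using nothing but Python’s standard library. The helper name is mine, and the badssl.com hostname is a public test endpoint, not anything from Bores:

```python
import socket
import ssl

def https_trust_check(host: str, port: int = 443) -> bool:
    """True only if the server's certificate chains to a preloaded CA root
    and matches the hostname. There is no partial credit."""
    context = ssl.create_default_context()  # the centralized trust hierarchy
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake completed: certificate is valid
    except ssl.SSLCertVerificationError:
        return False  # expired, self-signed, wrong host: all the same "no"

print(https_trust_check("example.com"))             # True: valid chain
print(https_trust_check("self-signed.badssl.com"))  # False: rejected outright
```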
For C2PA to work the same way, platforms would need to refuse to display unsigned content. That breaks the web and inverts the HTTPS precedent: instead of securing the transit layer, it becomes a digital rights management regime operating far above the infrastructure. It also creates a two-tier system where institutional media buys a pay-to-speak trust signal and everyone else gets suspicion and default cancellation.
Even Bores acknowledges the adoption problem:
The challenge is the creator has to attach it.
Like, self-signed?
HTTPS succeeded because servers had to implement it or get regulated out of existence. That’s a mandate that gas pipes can’t leak, not a mandate about the quality of the gas. CardSystems was crushed for failing to stop leaks. PCI DSS dropped a hammer on leaky payment card transactions everywhere. HTTPS was mandated on the transit path, backed by the threat of an outright ban.
In 2009 Google called me into their offices and begged for help to keep using broken, insecure SSL, hoping I could lobby PCI to stop mandating strong HTTPS. They liked leaky pipes. Talk about regulatory authority forcing innovation. Google lost that argument, big time. And I certainly didn’t take their money for dirty work; I told Google to stop being a jerk and help PCI help them protect their own users. There’s simply no equivalent forcing function for individual content creators.
Verification Isn’t Perception: Age of the Integrity Breach
The deeper problem is that C2PA solves the wrong layer. Cryptographic provenance answers whether content is signed by a trusted source. It does nothing for the integrity of the viewer: whether they accurately perceive what they see.
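Here’s a toy illustration of the layer provenance lives at, using Ed25519 signatures from the `cryptography` package as a stand-in for the real C2PA manifest format, which is more elaborate but makes the same narrow promise:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()  # e.g. a key embedded in a camera
public_key = camera_key.public_key()

footage = b"frames of an event, exactly as captured"
signature = camera_key.sign(footage)

# The one question provenance answers: were these bytes altered after signing?
public_key.verify(signature, footage)  # passes silently: untampered

try:
    public_key.verify(signature, footage.replace(b"exactly", b"selectively"))
except InvalidSignature:
    print("tamper-evident: a single changed byte breaks the signature")

# Questions it cannot answer: is the caption honest? Is the framing fair?
# Can the viewer perceive what the frames actually show? The math is silent.
```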
As I wrote yesterday, the people most vulnerable to synthetic face manipulation are those with the least perceptual training. Beware the isolated communities behind in the “face-space” race, who never developed the dimensions to distinguish unfamiliar groups.
They can’t detect fakes because they never learned to see the reals.
And a C2PA warning label ain’t gonna fix that.
- Labeled fakes still cause harm. Bores himself notes that “even a clearly labeled fake can have real-world consequences” for deepfake porn victims. The label doesn’t undo the perceptual and emotional damage.
- Signed content can still deceive. Authentic footage, cryptographically verified, can be selectively edited, deceptively framed, or presented without context (see the sketch after this list). C2PA tells you the file wasn’t tampered with. It doesn’t tell you whether the framing is honest.
- The viewer still has to see. If you can’t distinguish the faces of unfamiliar ethnic groups, you can’t evaluate whether signed footage of “protestors” or “terrorists” or “immigrants” actually shows what the caption claims.
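And nothing stops a deceptive edit from carrying its own perfectly valid signature. Continuing the toy model above, with a hypothetical editor’s key:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

editor_key = Ed25519PrivateKey.generate()  # a validly issued signing key
full_footage = b"crowd disperses peacefully after police withdraw"
cropped = full_footage[:15]                # b"crowd disperses" -- context gone

signature = editor_key.sign(cropped)
editor_key.public_key().verify(signature, cropped)  # passes: "verified"
print("the signature checks out; the missing context never gets checked")
```

The green checkmark certifies custody of the bytes, not honesty of the frame.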
A Tale of Two Problems
Bores is right about the obvious stuff: infrastructure matters. C2PA should still become the default in devices like cameras and even phones. Platforms should surface provenance data. Institutions should require it in evidentiary contexts.
But infrastructure solves an institutional problem: journalism, courts, banking, official communications. It doesn’t solve the human problem.
The cup is half empty because the human problem is perceptual poverty. The solution isn’t going to be cryptographic. It’s exposure: structured, high-volume exposure that builds the perceptual dimensions people need to see what they’re looking at.
C2PA answers: “Should I trust a source?”
Perceptual training answers: “Can I see what’s actually in front of me?”
Both questions matter, yet Bores is only asking the far less important one.