Fortune just quoted ex-Palantir New York Assemblymember Alex Bores on deepfakes. He says fake faces made by AI are “a solvable problem” using the Coalition for Content Provenance and Authenticity (C2PA) standard that cryptographically signs media files with tamper-evident credentials.
Bores’ cup is half full, but a half-full cup isn’t a real solution. The half he’s missing is the half that matters most in deepfakes.
Abusing an HTTPS Analogy
Bores lazily compares C2PA adoption to the web’s shift from HTTP to HTTPS:
It’d be like going to your banking website and only loading HTTP, right? You would instantly be suspect.
The analogy is instructive, though not in the way he intends. HTTPS works because of some very specific properties:
- A centralized trust hierarchy (certificate authorities) decides who gets certificates, and browsers ship with the trusted roots preloaded
- The check is binary: the certificate is valid or it isn’t (setting aside algorithm choices)
- Browsers now enforce it automatically, so users no longer have to choose
- The failure mode has been made abundantly clear: no glowing padlock, no transaction
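For contrast, here’s what that binary check looks like in practice. A minimal sketch using only Python’s standard library; the hostname is illustrative:

```python
import socket
import ssl

def https_trust_check(host: str, port: int = 443) -> bool:
    """Return True only if the server presents a certificate that chains
    to a preloaded root CA and matches the hostname. There is no partial
    credit: any failure aborts the handshake."""
    context = ssl.create_default_context()  # loads the preinstalled trust store
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake completed: trusted cert or nothing
    except ssl.SSLCertVerificationError:
        return False  # self-signed, expired, wrong host: all the same "no"

print(https_trust_check("example.com"))
```

No score, no maybe. The trust decision is made for the user before a single byte of content arrives.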
For C2PA to work the same way, platforms would need to refuse to display unsigned content. That breaks the web and ends up the opposite of HTTPS, because it escalates from transport infrastructure into digital rights management over the content itself. It also creates a two-tier system where institutional media gets a pay-to-speak trust signal and everyone else gets suspicion and cancellation by default.
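To see why, imagine a hypothetical platform gate that enforced C2PA the way browsers enforce HTTPS; Upload and its fields are invented for illustration, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Upload:
    author: str
    has_manifest: bool    # does any C2PA manifest exist at all?
    manifest_valid: bool  # does it chain to a trusted signing credential?

def admit(upload: Upload) -> bool:
    """HTTPS-style enforcement: no valid credential, no display.
    Newsrooms with signing credentials pass; everyone on an old camera
    or a metadata-stripping pipeline lands in the reject pile."""
    return upload.has_manifest and upload.manifest_valid
```

Today nearly everything on the web fails that check, which is why nobody can actually flip the switch.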
Even Bores acknowledges the adoption problem:
The challenge is the creator has to attach it.
Like, self-signed?
HTTPS succeeded because servers had to implement it or get regulated out of existence. Note what was mandated: the pipes, not the gas. The rule was that gas pipes can’t leak, not that the gas itself has to be quality content. CardSystems was crushed for failing to stop leaks. PCI DSS dropped a hammer on any payment card transaction path, anywhere, that leaked. HTTPS was mandated on the transit path under threat of an outright ban.
In 2009 Google called me into their offices and begged for help to keep using broken, insecure SSL; they thought I could lobby PCI to stop mandating strong HTTPS. They liked leaky pipes. Talk about regulatory authority forcing innovation. Google lost that argument, big time. And I certainly didn’t take their money for dirty work; I told Google to stop being a jerk and help PCI help them protect their own users. There’s simply no equivalent forcing function for individual content creators.
Verification Isn’t Perception: Age of the Integrity Breach
The deeper problem is that C2PA solves the wrong layer. Cryptographic provenance answers whether content is signed by a trusted source. It does nothing for the integrity of the viewer: whether they accurately perceive what they see.
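The mismatch is easiest to see as two function signatures; verify_c2pa is a stand-in for any real manifest verifier, and only its return type matters here:

```python
def verify_c2pa(file_bytes: bytes) -> bool:
    """Answers one question about the bytes: do they carry a signature
    chaining to a trusted credential? (Verification logic stubbed out.)"""
    raise NotImplementedError("any C2PA verifier fits this shape")

def viewer_perceives_accurately(viewer: "Person", image: bytes) -> bool:
    """The question that decides whether a deepfake lands. Nothing at
    the file layer can implement this; it lives in the viewer's
    perceptual training, which no manifest can supply."""
    raise NotImplementedError("not solvable at the file layer")
```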
As I wrote yesterday, the people most vulnerable to synthetic face manipulation are those with the least perceptual training. Beware the isolated communities falling behind in the “face-space” race, who never developed the dimensions to distinguish unfamiliar groups.
They can’t detect fakes because they never learned to see the reals.
And a C2PA warning label ain’t gonna fix that.
- Labeled fakes still cause harm. Bores himself notes that “even a clearly labeled fake can have real-world consequences” for deepfake porn victims. The label doesn’t undo the perceptual and emotional damage.
- Signed content can still deceive. Authentic footage, cryptographically verified, can be selectively edited, deceptively framed, or presented without context. C2PA tells you the file wasn’t tampered with. It doesn’t tell you whether the framing is honest.
- The viewer still has to see. If you can’t distinguish faces from unfamiliar ethnic groups, you can’t evaluate whether signed footage of “protestors” or “terrorists” or “immigrants” actually shows what the caption claims.
A Tale of Two Problems
Bores is right about the obvious stuff: infrastructure matters. C2PA should still be the default in devices like cameras and even phones. Platforms should surface provenance data. Institutions should require it in evidentiary contexts.
But infrastructure solves an institutional problem: journalism, courts, banking, official communications. It doesn’t solve the human problem.
The cup is half empty because the human problem is perceptual poverty. The solution isn’t going to be cryptographic. It’s exposure: structured, high-volume exposure that builds the perceptual dimensions people need to see what they’re looking at.
C2PA answers: “Should I trust a source?”
Perceptual training answers: “Can I see what’s actually in front of me?”
Both questions matter, yet Bores is only asking the far less important one.
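What might the training side look like? A minimal sketch of structured, high-volume exposure, assuming a labeled pool of real and synthetic face images; the drill format is my illustration, not a cited protocol:

```python
import random
from collections import defaultdict

def drill(pool: list[tuple[str, str, bool]], rounds: int = 50) -> None:
    """pool holds (image_path, group_label, is_real) items. Show a face,
    take a real/fake call, give immediate feedback, and report accuracy
    per group so undeveloped 'face-space' dimensions become visible."""
    hits: dict[str, int] = defaultdict(int)
    seen: dict[str, int] = defaultdict(int)
    for _ in range(rounds):
        path, group, is_real = random.choice(pool)
        guess = input(f"{path} -- real or fake? ").strip().lower() == "real"
        print("correct" if guess == is_real else "wrong")
        seen[group] += 1
        hits[group] += guess == is_real
    for group, total in seen.items():
        print(f"{group}: {hits[group]}/{total} correct")
```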
The issues with C2PA are numerous, and the folks who produced the spec do not seem to have understood the differences between the photography case and the web access case:
- HTTPS is only viable because its credentials can have a very short lifespan and can be easily revoked and replaced. Neither holds for the C2PA use case, and as far as I recall the specification doesn’t address credential management at all, most notably compromise and revocation.
- There is the whole issue of CA compromise, which the HTTPS experience (DigiNotar, for one) shows does happen.
- There is the flawed-implementation side. It is pretty safe to assume the camera companies are doing the cryptography in-house rather than hiring specialists; every other industry has gone down this route.
- The system cannot work without tamper-resistant key storage (a TPM or secure element) in the camera. Perhaps the Leica system uses one, but the fact that Nikon is rolling this out to older models as a firmware update suggests that is not how it’s being implemented in general, so I expect sufficiently motivated and resourced parties will be able to circumvent it.
- Perhaps most critically, even absent an explicit compromise, any cryptographic scheme has a natural lifespan as computing power moves on. Here that matters a great deal: in 10, 15, 20 years it will likely become possible to create ‘historical’ images carrying valid C2PA certification dated to the time of their putative origin.
So in the long run C2PA doesn’t help with source trust either. All in all, the impedance mismatch between PKI and what C2PA is trying to achieve is pretty big.
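The ‘historical images’ point deserves a sketch. Assuming a signer certificate in PEM form and the third-party cryptography package (version 42 or newer for the _utc accessors), the only temporal check a verifier can make is whether a claimed signing time falls inside the certificate’s validity window:

```python
from datetime import datetime, timezone
from cryptography import x509

def plausible_at(cert_pem: bytes, claimed_time: datetime) -> bool:
    """True if claimed_time falls inside the cert's validity window.
    Without trusted timestamping this cannot distinguish a genuine 2026
    signature from one minted in 2046 with a cracked 2026-era key: the
    forger simply claims any date inside the window."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    return cert.not_valid_before_utc <= claimed_time <= cert.not_valid_after_utc

# plausible_at(open("signer.pem", "rb").read(),
#              datetime(2026, 1, 1, tzinfo=timezone.utc))
```

A trusted timestamping authority is the standard PKI mitigation, but that just relocates the long-lived-trust problem to yet another credential that must survive the same decades.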