If You Like Privacy, Then Love Apple Child Protection Measures

Update August 21: Patrick Walsh wrote a post called “unpopular opinion” that agrees with me, and makes almost exactly the same points:

I believe Apple is doing right here not simply because of what they’re doing but because of how they’re doing it, which I think is a responsible and privacy-preserving approach to an important issue.

He even explains in more graphic detail why PTSD is a real problem in this space, and thus why Apple’s solution both preserves privacy and reduces harm.

Update August 19: Nicholas Weaver explains why “hash collision” news is not news at all and the risks are low to non-existent.

The only real risk, says one security researcher, is that anyone who wanted to mess with Apple could flood the human reviewers with false-positives.

All the commentary alleging that “lives will be ruined” by a hash collision is tone-deaf and unrealistic hype. We know, and have ample proof, that children’s lives will be ruined without these Child Protection Measures.

However, there is no meaningful technical or logical argument so far that demonstrates how any innocent adult’s life would be harmed in the process.

Don’t believe the hype. If you’ve read the over-sensationalized nonsense of Alex Stamos, Julian Assange or Edward Snowden, you will recognize them in this explanation by a psychologist:

In our social media lives we too often seek intense reaction rather than positive or helpful reactions, leading us to privilege sensationalized content. We want more engagement, not necessarily more grounded facts, so we do what we need to do to garner attention in digital spaces.

Take as just one example how in April of 2018 Alex Stamos claimed to be responsible for security at Facebook, yet on his watch it was “creating and storing facial recognition templates of 200,000 local users without proper consent”. That kind of moral failure and obvious privacy wrongdoing should get far more attention.



Let’s get one thing out of the way. Children are being exposed, their images cruelly taken, and the resulting harms are obvious and very serious. It’s awful. It’s criminal.

As someone who personally has investigated such crimes (and will never discuss details of where or when), I can tell you it is nothing you ever want to have to do… yet it is absolutely necessary work to protect victims and stop children’s lives from being destroyed by a lack of privacy.

Technical security controls are required, especially at scale. We know this from similar controls that have been reducing malware and spam for decades.

Right?

And that’s not even to get into the very different fact that investigating malware or spam doesn’t cause PTSD.

Apple has done exactly the right thing with their Child Protection Measures, putting some very old and tried controls in place at scale to help prevent serious rights violations while preserving the privacy of children.

I am not saying that as an extremist on either end of the privacy-versus-knowledge spectrum. The middle path here (almost always denied by privacy and knowledge extremists through an excluded-middle fallacy) is that we clearly need enough knowledge to preserve enough privacy.

Caring about privacy is NOT inconsistent with protecting society (more accurately protecting children) from exposure by investigating violations of their privacy.

Now, I hear many in America claiming to be upset at this news because they face “losing” some fake notion of “freedom”: the freedom to deny children privacy, and to illegally share and distribute known harmful photos of very young strangers.

Let me put this another way.

I know that America is the only United Nations member failing to ratify the international treaty that protects children. The Convention on the Rights of the Child defends every child’s right to survival, education, nurturing, and protection from violence and abuse.

Thus, of course, we know why Americans are so up in arms about being blocked by Apple from the violence and abuse of children: America remains the only country in the world that has refused to agree to protect them from exactly that.

It’s a real problem in America because a small group of very powerful white men spends millions to investigate spam and malware when they think it might hurt someone’s wallet, yet draws a very firm line and refuses to budge when it comes to protecting children from privacy abuse, let alone death.

American tech executives in particular are being hypocritical when they block protections only when those protections are narrowly defined to benefit children.

They claim they care about privacy while denying it to children being harmfully exposed. They claim they care about freedom while denying it to children being harmfully controlled.

Hany Farid captures this perfectly in his Newsweek take-down of a cruel and tone-deaf Facebook executive:

Will Cathcart, head of Facebook’s WhatsApp messaging app, had this to say about Apple’s announcement: “I read the information Apple put out yesterday and I’m concerned. I think this is the wrong approach and a setback for people’s privacy all over the world. People have asked if we’ll adopt this system for WhatsApp. The answer is no.” He continued with: “Apple has built software that can scan all the private photos on your phone—even photos you haven’t shared with anyone. That’s not privacy.”

These statements are misinformed, fear-mongering and hypocritical.

While Apple’s technology operates on a user’s device, it only does so for images that will be stored off the device, on Apple’s iCloud storage. Furthermore, while perceptual hashing, and Apple’s implementation in particular, matches images against known CSAM (and nothing else), no one—including Apple—can glean information about non-matching images. This type of technology can only reveal the presence of CSAM, while otherwise respecting user privacy.

Cathcart’s position is also hypocritical—his own WhatsApp performs a similar scanning of text messages to protect users against spam and malware. According to WhatsApp, the company “automatically performs checks to determine if a link is suspicious. To protect your privacy, these checks take place entirely on your device.” Cathcart seems to be comfortable protecting users against malware and spam, but uncomfortable using similar technologies to protect children from sexual abuse.

BOOM!

You couldn’t drop a harder hammer on Facebook management than Farid does, exposing their executive culture as willfully harmful and the company as toxic to society.

Read this again and again:

WhatsApp performs a similar scanning of text messages to protect against spam and malware… but uncomfortable using similar technologies to protect children…

Farid goes on to make many excellent points in his clear-headed centrist article, and from a technical perspective they are all very well stated. He’s right.

However, he also omits the looming cultural factor here, perhaps because it is the biggest elephant to ever stand in the room of privacy.

Who really knows where to begin discussing it properly? I propose starting here: do not report on Apple’s move without mentioning America’s refusal to ratify the rights of the child to protection from abuse.

Rugged individualism, the libertarian nonsense of social Darwinism, is a bogus concept and should be called out for exactly what it is.

…social Darwinists hold beliefs that conflict with the principles of liberal democracy, and their vision of social life is not conducive to fostering a cooperative, egalitarian society.

Even worse is when it creeps into these discussions unchallenged by reporters purporting to be talking about privacy rights.

This is leadership in privacy by Apple, and it is entirely in step with everything they have been saying and doing.

That’s the proper framing when we hear Americans jump to argue that children don’t deserve even the basic privacy protection that can save their lives, which Apple is finally bringing to market.

Now I’ll try to explain the technical aspects of the Apple system, which as I said is more of a throwback to old methods already used in many places.

When the big “cloud” services scan images, they typically generate a hash (a small digest of the image) and compare it to hashes of already known bad images (using what’s known as a “CSAM” database).

Apple is not doing this, instead pulling forward the much older model used in things like antivirus software: it pushes a blocklist to each client. That means hashes of known bad images are given to the Apple client so it can try to match them locally when images are being uploaded by the client to Apple’s iCloud.

Very 1980s? The local scan-and-match model is not new, and Apple is not the first to use it.
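
If you want the difference spelled out, here is a minimal sketch of the two models, not Apple’s actual implementation: a plain SHA-256 digest stands in for the perceptual hash (Apple’s NeuralHash) a real system would use, and the blocklist is just a hypothetical in-memory set of placeholder digests.

```python
# Minimal sketch (not Apple's implementation) contrasting the two models
# described above. A plain SHA-256 digest of the file bytes stands in for the
# perceptual hash a real system would use, and the blocklist is just an
# in-memory set of placeholder hex digests.
import hashlib

KNOWN_BAD_HASHES = {"0" * 64}  # placeholder entries from a hypothetical CSAM hash database

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; a real one tolerates re-encoding and resizing."""
    return hashlib.sha256(image_bytes).hexdigest()

def cloud_side_scan(uploaded_image: bytes) -> bool:
    """Older cloud model: the provider hashes every image after it lands on its servers."""
    return image_hash(uploaded_image) in KNOWN_BAD_HASHES

def client_side_check(image_bytes: bytes, local_blocklist: set[str]) -> bool:
    """Apple-style model: the device matches against a locally synced blocklist,
    and only for images being uploaded to the optional iCloud store."""
    return image_hash(image_bytes) in local_blocklist
```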

If a hash is locally matched against the blocklist (meaning the hash of an image being sent by the client to iCloud matches the hash of a known bad image), Apple writes an encrypted “voucher” into an iCloud log with a wait state. When a threshold of these vouchers is reached (said to be around 30), it triggers decryption and a report for Apple’s human reviewers to verify the bad images.
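
To make the “wait state” concrete, here is a minimal sketch of the threshold behavior only. Apple’s actual design uses private set intersection and threshold secret sharing so that vouchers are cryptographically unreadable below the threshold; the class and names below are illustrative assumptions, not Apple’s API, and a simple count stands in for that machinery.

```python
# Minimal sketch of the threshold behavior only: vouchers accumulate as opaque
# blobs, and nothing is surfaced for human review until the threshold is reached.
MATCH_THRESHOLD = 30  # reported to be around 30 matches

class VoucherLog:
    def __init__(self) -> None:
        self._vouchers: list[bytes] = []  # encrypted vouchers written alongside uploads

    def add_voucher(self, encrypted_voucher: bytes) -> None:
        """Recorded whenever the client finds a local blocklist match."""
        self._vouchers.append(encrypted_voucher)

    def ready_for_human_review(self) -> bool:
        """Only at or above the threshold is anything decrypted and reported."""
        return len(self._vouchers) >= MATCH_THRESHOLD
```

The point of the sketch is the gating: below the threshold there is nothing for a human reviewer to see.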

In other words, cloud companies scan your images in the cloud, which is not great for privacy and also not great for the people who have to look at bad images. Seriously, PTSD is a big part of this story that privacy extremists never seem to acknowledge, even though you’d think it would fit their interests.

Apple, however, is bringing privacy enhancements to the client by shifting to a local copy of the blocklist, and by scanning only if you upload to their iCloud, which is optional.

Having a local client generate a hash and match it locally, sending an encrypted log entry only when using an optional data store, is NOT a back door by any reasonable definition.

Apple also is going to scan encrypted iMessages when a family account is set up, such that “pornographic” images are blurred when detected, with a safety warning to children that their parents will be notified if they continue.

This system uses image analysis on the device with what’s known as a machine learning classifier. Of course such systems are flawed, as I’ve said in almost every presentation of mine since at least 2012.

Don’t get too worried that Apple will experience failure with machine learning. Everyone does. “AI” is fundamentally broken, and that’s the much deeper issue. Their model admits such risks and somewhat takes them into account (e.g. with an opt-in parental monitoring system where children get a dialog to bypass).
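
As a rough illustration of that opt-in flow, here is a minimal sketch of the family-account feature described above. The classifier, its score threshold, and the dialog handling are hypothetical stand-ins; Apple has not published this as an API.

```python
# Minimal sketch of the opt-in family-account flow described above: an on-device
# classifier score gates a blur and a warning that the child can bypass, and
# parents are notified only if the child continues. The classifier, threshold,
# and function names are hypothetical stand-ins, not Apple's API.
from dataclasses import dataclass

EXPLICIT_THRESHOLD = 0.9  # assumed cutoff for the on-device classifier score

@dataclass
class MessageImage:
    pixels: bytes
    blurred: bool = False

def classifier_score(image: MessageImage) -> float:
    """Placeholder for the on-device ML classifier (which, like all of them, will misfire)."""
    return 0.0

def handle_incoming_image(image: MessageImage, child_chose_to_view: bool) -> bool:
    """Returns True if the parents should be notified."""
    if classifier_score(image) < EXPLICIT_THRESHOLD:
        return False              # nothing flagged; nothing leaves the device
    image.blurred = True          # blur first, then show the safety warning
    if child_chose_to_view:       # the bypass dialog mentioned above
        return True               # notification happens only if the child continues
    return False
```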

On that depressing note, the latest research has in fact started to define trustworthy machine learning.

Again, NOT a back door by any reasonable definition.

Why would Apple work to protect the privacy of children? Why are they giving people more privacy by scanning on a client instead of in the cloud? Perhaps the answer is obvious?

It’s better privacy.

As a final thought, people often raise the slippery slope fallacy on this topic. It goes something like “if we allow Apple to deploy technology that protects children from harm, what’s to stop the government next taking over our entire lives?”

Besides being a totally bogus fallacy argument that intentionally ignores all the middle steps, it also raises the question of why children are the line being drawn here. Where were all these advocates when client-side scanning by anti-virus was deployed, and what are they saying about it today?

Anyone with insider information on anti-virus knows the messy connections into governments well, yet where’s the outrage and evidence of tyranny from overreach as a result?

We’ve seen almost nothing of the kind (and NO I am not going to tell you where there has been overreach because the fact that you don’t know just proves my point).
