Deepfake Training Only Improves Detection 10%

Nautilus may be trying to scare people with FUD in an article called “Deepfake… Should Scare Us.”

The most recent study, by psychologist Sophie Nightingale of Lancaster University and computer scientist Hany Farid of the University of California, Berkeley, focused on deepfake images. In one online experiment, they asked 315 people to classify images of faces, balanced for gender and race, as real or fake. Shockingly, overall average accuracy was near chance at 48 percent, although individuals’ accuracy varied considerably, from a low of around 25 percent to a high of around 80 percent.

In a follow-up experiment, 219 different people did the same task but first learned about features to look for that can suggest an image is a deepfake, including earrings that don’t quite match, differently shaped ears, or absurdly thin glasses. People also received corrective feedback after submitting each answer. “The training did help, but it didn’t do a lot,” Nightingale told me. “They still weren’t performing much better than chance.” Average accuracy increased to only 59 percent.

My guess is the training to detect fakes wasn’t very good.

That’s an equally valid conclusion, which I don’t see mentioned here. What if training could be introduced that helped more?

To put it another way, does a testing and education program about fakes account for people who are blind and don’t trust any visuals? There’s an important clue in the article about why testing vision alone may be a poorly contrived exercise:

Part of the reason it remains so difficult to make a believably realistic recreation of young Luke is because he has to emote and speak. Even in The Book of Boba Fett, producers clearly recognized the limits of their illusion, frequently cutting away from Luke’s face whenever he had extended dialogue.

So what are they training people on exactly, and why use such a narrow band, when a simple second factor would increase detection rates significantly?
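To make the second-factor point concrete, here is a minimal sketch using made-up probabilities (not figures from the Nightingale/Farid study): if a visual check alone catches a fake 59 percent of the time, pairing it with an independent second check (say, voice or contextual verification) at 70 percent raises the combined detection rate well past either alone.

```python
# Hypothetical illustration: combining two independent detection checks.
# The probabilities below are made-up examples, not results from the study.

def combined_detection(p_visual: float, p_second: float) -> float:
    """Probability that at least one of two independent checks flags a fake."""
    return 1 - (1 - p_visual) * (1 - p_second)

# Visual inspection alone (roughly the post-training accuracy cited above)
# plus an assumed independent second factor.
p = combined_detection(0.59, 0.70)
print(round(p, 3))  # 0.877
```

The independence assumption is doing real work here; correlated checks gain less. But the arithmetic shows why testing a single narrow channel understates what layered verification can achieve.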

Here’s another clue in the article, which reveals how “risk” analysis can end up exactly backwards:

We’re such expert face detectors that we can’t help but see faces everywhere: in rock formations on Mars, in the headlights and grilles of cars, and in misshapen vegetables.

Nobody seeing faces in misshapen vegetables gets an award for being an expert face detector. That massively contradicts the premise that anyone should be scared by the fact that fake faces can look like real ones.

This all goes back to the main point I often try to make, which is that society tends to very much like and enjoy fakes until it doesn’t. That has to be kept in mind.

Does going into a theater make it more acceptable to watch people fake other people (acting), as a form of containment, than if they do it on the street or in our homes? Every Halloween I welcome many (admittedly marginal quality) deep fakes into my environment and nobody seems worse for it.

When a person walking up to you says “I’m your father,” there are a million data points in your mind evaluating that statement. When someone says “I’m celebrity X depicting fake character Y,” there are significantly fewer points to evaluate. And if a researcher asks “Is this picture a real person?” there are even fewer points.

Scary? At the end of the day social engineering is a problem yet hardly a new topic, so I often wonder why deep fakes are so exciting to people now rather than many years or even decades ago.

More than 20 years ago I had to slip into facilities, engineering my way out with someone’s internal hard drive in hand (leaving an exact replica inside their computer instead) without anyone noticing.

Layers of presenting fake information are a professional exercise across many industries, probably not unlike the kind of medical operations we have come to accept as normal and beneficial (e.g. organ transplants to save a life, plastic surgery to repair burns).

Instead of being scared by the premise of challenging areas of weak trust scaffolding (e.g. looking at someone’s face to determine something), people need to think more broadly about what is really at stake in a society that is scared by fakes.

Here’s a more important and even simpler test:

A black woman sends messages using an undetectable appearance of a white male celebrity.

Who does this example scare, and why?
