RSA SF Conference (RSAC SF) Day One: Misinformation is the New Malware

Here are my notes from the three-person panel at RSA moderated by Ted Schelin (Silicon Valley venture capitalist), which to my ears was itself dangerously pushing misinformation. Not only was it far too American in focus (literally asking the world to serve U.S. corporate interests) and far too fluffy in pushing unregulated corporations as competent and good while regulators are clumsy and bad, it also committed an obvious fatal flaw: the broken-window fallacy of suggesting that attacks are good for these companies and their business.

Yoel Roth, academic and ex-Twitter staff

Three-part test for censorship:

  1. Does the opinion advance a statement of fact?
  2. Is it provably false according to experts or data?
  3. Is it harmful?

The three-part test was used to decide what to focus on. Full traffic used to be half a billion posts a day, though it has gone down lately. Three categories of misinformation, each in the hundreds of millions of posts, propagated quickly:

  • Healthcare
  • Political
  • Crisis

That is a significant quantitative challenge, even at just hundreds of millions of events.
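For concreteness, here is a minimal sketch (my own illustration, not anything shown on the panel) of that three-part test expressed as a triage filter, along with the back-of-the-envelope throughput implied by half a billion posts a day. The Post record, its flag fields, and the example data are all hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Post:
        """Hypothetical post record; a real system would derive these flags."""
        text: str
        advances_factual_claim: bool  # 1. does it advance a statement of fact?
        provably_false: bool          # 2. is it provably false per experts or data?
        harmful: bool                 # 3. is it harmful?

    def in_scope(post: Post) -> bool:
        # All three prongs must hold before a post is even considered for action.
        return post.advances_factual_claim and post.provably_false and post.harmful

    # Scale context from the talk: half a billion posts a day means even a
    # cheap per-post predicate must sustain roughly 5,800 evaluations a second.
    print(f"{500_000_000 / 86_400:,.0f} posts/sec")  # -> 5,787 posts/sec

    posts = [
        Post("pure opinion, no factual claim", False, False, False),
        Post("provably false and harmful claim", True, True, True),
    ]
    print([p.text for p in posts if in_scope(p)])  # only the second post passes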

Attackers are fighting on an integrated battlefield, and you’ve already lost if you don’t consolidate and get rid of internal silos and delegations. Individual defenses miss how things play out together. The hack of the highest-profile Twitter accounts, carried out by spearphishing a content moderator, is an example: getting credentials is one playbook within an integrated, multi-front approach. Twitter staff, Twitter backend systems, and Twitter content cannot be looked at as anything but a unified problem.

We have an obligation to protect users from TikTok because it is a Chinese tool. The question of America banning it is thorny, because the security community has a responsibility to protect users from a bad platform.

What is misinformation? I was hauled in front of Congress because of a misunderstanding about misinformation. A cabal of companies getting together to decide what is right or wrong is bad; companies working together to share information about Russian troll farms is doable, and I did that starting in 2017. We focused on inauthentic behavior by sophisticated adversaries we didn’t understand. We need to be clear about what we’re working on, or we end up working on very hard problems with messy vocabulary.

Lisa Kaplan, CEO

Disinformation is fast, cheap, and easy to do. The Russian attacks of 2016 have become commonplace now; anyone can spin up a network. The Chinese are getting more aggressive and focused, working to undermine the fabric of our economy and of democracy in our society and others.

Competitors launch short-seller attacks.

It is just like malware, but it happens out in the open, so it can be caught. Organizations can take steps earlier than they can with malware and stop their stock price from tanking.

(NOTE: this is patently untrue; malware is not only observable, it is widely shared)

This is going to get significantly worse, but I’m an optimist. The weakest link is always people.

Think about vaccine disinformation as someone trying to make your staff sick: an anti-competitive practice that prevents people from being able to work.

Misinformation is helping organizations collaborate more across internal departments. CISOs get to work with other groups like communications, legal, and government affairs. In 2019 this was seen as a communications problem, but savvy organizations now see it as a business problem.

There are threats in countries that don’t have a First Amendment, and attacks on U.S. social media platforms have an impact globally. The world doesn’t have one set of laws. We’ll be better off if people around the world come together and work together to help U.S. companies defend themselves against attacks.

Cathy Gellis, lawyer

There are things you want the law to say no to that the law can’t say no to, because the First Amendment shows up.

Wrongness happens, and when a law that forbids being wrong chills speech, you have a problem. Who decides what is wrong? Government deciding is dangerous: truth rules made by politicians will be gamed, and government offices would become political prizes because they could control speech.

Platforms need latitude to figure out what is wrong and which ideas and users they want to be associated with. The law doesn’t get to tell them yes or no.

Some laws enable and protect things. Section 230 takes the First Amendment and makes it usable: it gives platforms the ability to figure out how they want to moderate speech without fear of interference (to do their worst or their best without oversight).

Section 230 is so misunderstood that people want to take it away without understanding the consequences. Solutions are technical; governments should give private actors all the rope they want.

It’s unlikely a government can ban TikTok, because the concerns about it don’t match the regulations. Capturing data and the resulting privacy loss is a real concern and could be regulated, but the U.S. doesn’t have a privacy regulation at the federal level; the government doesn’t have anything coherent and isn’t acting on the actual problem. Some regulators are instead talking about content quality, trying to regulate what American kids can learn and complaining that TikTok evades their government’s censorship.

Questions from the audience

Banning TikTok on government-owned devices is breaking our public signs (cut off by moderator). Where will the support and funding come from to fix things?

Cathy: It amuses me that government bans have bad consequences.

What definition should we use for scoping and finding misinformation threats?

Lisa: Don’t you want to know what everyone is talking about, whether it’s on a malicious domain or coming from a state actor or a criminal network?

You said information can cause harm. What do you mean by harm?

Yoel: Great question. All things are connected. Caution against rigid definitions.

(NOTE: this totally contradicts what he said earlier, when he wanted a very tight definition and cautioned against being broad)
