Can Facebook’s CSO Be Held Liable for Atrocity Crimes?

Something like this image of weaponized social media may be the next addition to The Atlantic’s “Brief Visual History of Weapons”.

New legal research moves us closer to holding social media executives criminally liable for the Rohingya crisis and other global security failures on their watch:

…this paper argues that it may be more productive to conceptualise social media’s role in atrocity crimes through the lens of complicity, drawing inspiration not from the media cases in international criminal law jurisprudence, but rather by evaluating the use of social media as a weapon, which, under certain circumstances, ought to face accountability under international criminal law.

The Guardian published a scathing report on how Facebook was used in the genocide:

Hate speech exploded on Facebook at the start of the Rohingya crisis in Myanmar last year, analysis has revealed, with experts blaming the social network for creating “chaos” in the country. […] Digital researcher and analyst Raymond Serrato examined about 15,000 Facebook posts from supporters of the hardline nationalist Ma Ba Tha group. The earliest posts dated from June 2016 and spiked on 24 and 25 August 2017, when ARSA Rohingya militants attacked government forces, prompting the security forces to launch the “clearance operation” that sent hundreds of thousands of Rohingya pouring over the border. […] The revelations come to light as Facebook is struggling to respond to criticism over the leaking of users’ private data and concern about the spread of fake news and hate speech on the platform.

The New Republic referred to Facebook’s lack of security controls at this time as a boon for dictatorships:

[U.N. Myanmar] Investigator Yanghee Lee went further, describing Facebook as a vital tool for connecting the state with the public. “Everything is done through Facebook in Myanmar,” Lee told reporters…what’s clear in Myanmar is that the government sees social media as an instrument for propaganda and inciting violence—and that non-government actors are also using Facebook to advance a genocide. Seven years after the Arab Spring, Facebook isn’t bringing democracy to the oppressed. In fact…if you want to preserve a dictatorship, give them the internet.

Around the same time, Bloomberg suggested Facebook was, by its own design, operating as a mass weapon in the service of dictatorship.

Looking back at this time frame, it seems important to note that the key Facebook executive at the head of decisions about user safety was in only his second year ever as a “chief” of security.

He had infamously taken his first ever Chief Security Officer (CSO) job at Yahoo in 2014, only to leave that post abruptly and in chaos in 2015 (without disclosing some of the largest privacy breaches in history) to join Facebook.

August 2017 was the peak period of risk, according to the analysis above. Two months later, in October, the Facebook CSO launched a “hit back” PR campaign to silence the growing criticism:

Stamos was particularly concerned with what he saw as attacks on Facebook for not doing enough to police rampant misinformation spreading on the platform, saying journalists largely underestimate the difficulty of filtering content for the site’s billions of users and deride their employees as out-of-touch tech bros. He added the company should not become a “Ministry of Truth,” a reference to the totalitarian propaganda bureau in George Orwell’s 1984.

His talking points read like a sort of libertarian screed, as if he thought journalists were ignorant and would foolishly push everyone straight into totalitarianism with their probing for basic regulation, such as better editorial practices and the protection of vulnerable populations from harm.

Think of it like this: the chief of security says it is hard to block Internet traffic with a firewall because doing so would lead straight to shutting down the business. That doesn’t sound like a security leader; it sounds like a technologist who puts making money above user safety (e.g. what the Afghanistan Papers call the profitability of war).
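To see why the firewall analogy stings, consider how routine the baseline control actually is. Below is a minimal, purely hypothetical sketch of a deny-list filter, the content-moderation equivalent of a one-line firewall rule; the BLOCKLIST terms and the “hold for review” behavior are illustrative assumptions, not anything from Facebook’s real systems:

```python
# Purely hypothetical sketch, not Facebook's systems: a deny-list
# content filter, the moderation equivalent of a one-line firewall rule.
# BLOCKLIST and the "hold for review" step are assumptions for illustration.

BLOCKLIST = {"flagged-term-a", "flagged-term-b"}  # hypothetical flagged terms

def should_hold_for_review(post_text: str) -> bool:
    """Return True when a post contains a flagged term, so it can be
    held for human review rather than published immediately."""
    tokens = post_text.lower().split()
    return any(term in tokens for term in BLOCKLIST)

if __name__ == "__main__":
    post = "example post containing flagged-term-a"
    if should_hold_for_review(post):
        print("held for review")  # stand-in for a real moderation queue
```

A real deployment would of course need language coverage, context, and appeals, but that is the point of the analogy: holding risky content for review is a well-understood control, and “it is hard” reads as a statement about priorities, not feasibility.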

Facebook’s top leadership was rolling out angry “shame” statements aimed at those most concerned about the lack of progress. The CSO appeared to be saying that doing anything more than what he judged sufficient at that crucial time would be so hard that journalists (ironically the most prominent defenders of free speech, the very people who drive transparency) couldn’t even understand it if they saw it.

Take, for example, one of the many “hit back” tweets posted by Facebook’s CSO:

My suggestion for journalists is to try to talk to people who have actually had to solve these problems and live with the consequences.

To me that reads like the CSO saying his staff suffer when they have to work hard, while calling journalists stupid for supposedly never talking with anyone.

Such a patronizing and tone-deaf argument is hard to witness, and truly incredible to read, especially when you consider nearly 800,000 Rohingya fleeing for their lives while a Facebook executive lectures journalists about living with consequences.

Compare that to what journalists in the field reported at that same exact time, October 2017, as they talked to people living right then and there with Facebook’s failure to solve these problems.

Warning: extremely graphic and violent depictions of genocide

Here’s another way to keep the Facebook “hit back” campaign against journalists in perspective. While the top security executive was dismissing the people closest to real-world consequences as not expert enough on that exact topic, he himself brought no great experience or examples to the table to earn anyone’s trust.

As the outspoken public face of a high-profile risk management disaster, he came to represent Facebook’s dangerously clueless stumbles year after year:

A person with knowledge of Facebook’s [2015] Myanmar operations was decidedly more direct than [Facebook vice president of public policy] Allen, calling the roll out of the [security] initiative “pretty fucking stupid.” […] “When the media spotlight has been on there has been talk of changes, but after it passes are we actually going to see significant action?” [Yangon tech-hub Phandeeyar founder] Madden asks. “That is an open question. The historical record is not encouraging.”

The “safety dial was pegged in the wrong direction”, as some journalists put it back in 2017, under a CSO who apparently thought it a good idea to complain about how hard it was to protect people from harm (while making huge revenues). Perhaps business schools soon will study Facebook’s erosion of global trust under this CSO’s leadership.

Tragically, we know today that journalists were repeatedly right in their direct criticism of Facebook’s security practices and in their demands for greater transparency. We also plainly see how an inexperienced CSO’s personal “hit back” at his critics was wrong, with its opaque promises and a patronizing tone rooted in his fears of an Orwellian fiction.

Facebook has been, and continues to be, out of touch with basic social science. It has resisted, and continues to resist, safety controls on speech that protect human rights, all while saying it is committed to safety and arguing against norms of speech regulation.

The question increasingly is whether actions like an aggressive “hit back” against people warning of genocide at a critical moment of risk (arguing it is too hard to stop Facebook from being misused as a weapon, while pushing back on that very criticism) make a “security” chief criminally liable.

My sense is that it will be anthropologists, experts in researching baselines of inherited rights within relativist frameworks, who emerge as best qualified to help answer what counts as an acceptable vulnerability in social media technology.

We see this already in articles like “The trolls are teaming up—and tech platforms aren’t doing enough to stop them”:

The personal, social, and material harms our participants experienced have real consequences for who can participate in public life. Current laws and regulations allow digital platforms to avoid responsibility for content…. And if online spaces are truly going to support democracy, justice, and equality, change must happen soon.

Holding a CSO accountable for atrocity crimes on his watch appears to be the most logical change, and a method of reasoned enforcement, if I’m reading these human rights law documents right.
