AI Apocalypse: Misaligned Objectives or Poor Quality Control?

An author claims to have distilled down AI risks of great importance, which they refer to as “Misaligned objectives.”

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocently enough – Google’s AI can now schedule your appointments over the phone – then, before you know it, we’ve accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? There’s a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom’s Paperclip Maximizer uses the arbitrary example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.

OK, first, it is false to say nobody intends for robots to murder everyone. Genocide is a very real thing. Mass murder is a very real thing. Automation is definitely part of those evil plans.

Second, it seems to me this article misses the point entirely. Misaligned objectives may in fact be aligned objectives, just implemented in unexpected or sloppy ways that people are reluctant to revise and clarify.

It reminds me of criticisms in economics of poor productivity measurements, a constant problem in Soviet Russia (e.g. window factories spitting out panes that nobody could use). Someone is benefiting from massive paperclip production, but who retains authorization over output?

If we’re meant to be saying a centrally-planned, centrally-controlled system of paperclip production is disastrous for everyone but dear leader (in this case an algorithm), we might as well be talking about market theory texts from the 1980s.

Let’s move away from these theoretical depictions of AI as future Communism and instead consider a market-based application of automation today that kills.

Cement trucks have automation in them. They repeatedly run over and kill cyclists because they operate with too wide a margin of error in a society with vague accountability, not because of misaligned objectives.

Take for example how San Francisco has just declared a state of emergency over pedestrians and cyclists dying from automated killing machines roaming city streets.

As of August 31, the 2019 death toll from traffic fatalities in San Francisco was 22 people — but that number doesn’t include those who were killed in September and October, including Pilsoo Seong, 69, who died in the Mission last week after being hit by a truck. On Tuesday, the Board of Supervisors responded to public outcry over the issue by passing a resolution to declare a state of emergency for traffic safety in San Francisco.

Everyone has the same or similar objectives of moving about on the street; it’s just that some are allowed to operate with such low levels of quality that they can indiscriminately murder others and say it’s within their expected operations.

I can give hundreds of similar examples. Jaywalking is an excellent one, as machines have already interpreted that racist law (a human objective to criminalize non-white populations) as a license to kill pedestrians without accountability.

Insider-threat as a Service (IaaS)

“The Great Impostor,” a 1961 film starring Tony Curtis, based on the “true story of Ferdinand Waldo Demara, a bright young man who hasn’t the patience for the normal way of advancement [and] finds that people rarely question you if your papers are in order.”
During several presentations and many meetings this past week I’ve had to discuss an accumulation of insider threat stories related to cloud service providers.

It has gotten so bad that our industry should stop saying “evil maid” and start saying “evil SRE” instead. Anyway, has anyone made a list? I haven’t seen a good one yet, so here’s a handy reference for those asking.

First, before I get into the list, don’t forget that engineers are expected to plan for people doing things incorrectly. It is not ok for any cloud provider to say they accept no responsibility for their own staff causing harm.

Do not accept the line that victims should be entirely responsible. Amazon has been forced to back down on this before.

After watching customer after customer screw up their AWS S3 security and expose highly sensitive files publicly to the internet, Amazon has responded. With a dashboard warning indicator…
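The kind of misconfiguration behind that dashboard warning is straightforward to detect programmatically. Here is a minimal sketch, assuming a simplified view of S3 ACLs: the two grantee URIs are the real public-group identifiers S3 defines, but the function and data shapes are illustrative stand-ins, not Amazon’s implementation.

```python
# Illustrative public-exposure check for S3 bucket ACL grants.
# The two grantee URIs below are the public groups S3 actually defines;
# everything else is a simplified stand-in for real ACL handling.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def is_publicly_exposed(acl_grants):
    """Return True if any grant in the ACL goes to a public group."""
    return any(
        grant.get("Grantee", {}).get("URI") in PUBLIC_GRANTEES
        for grant in acl_grants
    )

# A world-readable bucket, the classic setup behind these breaches:
assert is_publicly_exposed(
    [{"Grantee": {"URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
      "Permission": "READ"}]
)
# A bucket granted only to its owner:
assert not is_publicly_exposed(
    [{"Grantee": {"Type": "CanonicalUser", "ID": "owner-id"},
      "Permission": "FULL_CONTROL"}]
)
```

A real audit would also have to fetch live ACLs and evaluate bucket policies and account-level public-access settings; the point is only that the check Amazon eventually surfaced as a warning indicator is simple enough to automate.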

And in the case of an ex-AWS engineer attacking a customer, the careless attitude of Amazon has become especially relevant.

“The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it,” said Cloudflare’s Evan Johnson. Now senators Ron Wyden and Elizabeth Warren have penned an open letter to the FTC, asking it to investigate if “Amazon’s failure to secure the servers it rented to Capital One may have violated federal law.” It noted that while Google and Microsoft have both taken steps to protect customers from SSRF attacks, “Amazon continues to sell defective cloud computing services to businesses, government agencies and to the general public.”

Second, a more complicated point regularly put forth by cloud companies, including Amazon, is the claim that “at the time” arguments let them escape liability.

When an Uber driver brutally killed a little girl in a crosswalk, Uber tried to claim their driver “at the time” wasn’t their responsibility.

Attorneys for Uber said the ride-sharing company was not liable … because the driver was an independent contractor and had no reason to be actively engaged with the app at the time. […] “It’s their technology. They need to make it safe,” [family attorney] Dolan said in January, suggesting that a hands-free mode would bring the business into compliance with distracted-driving laws.

Uber paid the family of the dead girl an undisclosed sum a year later “to avoid a trial about its responsibility for drivers who serve its customers”.

Lyft ran into a similar situation recently when “rideshare rapists” around America operated “at the time” as private individuals who just happened to have Lyft signage in their cars when they raped women. The women were intoxicated or otherwise unable to detect that drivers “at the time” were abusing intimate knowledge of gaps in a cloud service trust model (including weak background checks).

Imagine a bank telling you it has no responsibility for money stolen from your account because “at the time” the thief was no longer its employee. It’s not so simple; otherwise companies could fire criminals mid-act and instantly wash their hands without stopping the actual harm.

Let’s not blindly walk into this thinking insiders, such as SRE, are off the hook when someone throws up pithy “at the time” arguments.

And now for the list

1) AWS engineer in 2019

The FBI has already arrested a suspect in the case: A former engineer at Amazon Web Services (AWS), Paige Thompson, after she boasted about the data theft on GitHub.

Remember how I said “at the time” arguments should not easily get people off the hook? The former engineer at AWS, who worked there around 2015-2016, used exploits known since 2012 on servers in 2019. “At the time” gets fuzzy here.

Let the timeline sink in, and then take note that customers were making mistakes with Amazon for many years, and their staff would have had front-row seats to the vulnerabilities. Other companies treated the same situation very differently. “Little Man in My Head” put it like this:

The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP). All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability. On the other hand, AWS applications continue to get hit over and over and over again. Does that tell you something?

Yes, it tells us that an inside engineer working for Amazon any time after 2012 would have seen a lot of vulnerabilities sitting ripe for the picking, unlike engineers at other cloud providers. Boasting about theft is what got the former engineer caught, which should make us wonder about the engineers who never boasted.
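The “dangerous instance metadata endpoint” in the quote above is what makes SSRF so costly on AWS: any server that can be tricked into fetching an attacker-supplied URL can be pointed at the link-local metadata address and made to hand back credentials. A minimal sketch of the guard such applications were missing follows; the function name and rules are my illustrative assumptions, and real code must also resolve hostnames and re-check the result, since DNS can point anywhere.

```python
from urllib.parse import urlparse
import ipaddress

def is_safe_fetch_target(url: str) -> bool:
    """Illustrative SSRF guard: reject URLs aimed at internal addresses,
    including the link-local instance metadata endpoint 169.254.169.254."""
    host = urlparse(url).hostname or ""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Hostname rather than a literal IP: a real guard must resolve it
        # and re-apply these checks on every resolved address.
        return True
    return not (ip.is_link_local or ip.is_private or ip.is_loopback)

# The classic SSRF target on AWS: IAM credentials via the metadata service.
assert not is_safe_fetch_target(
    "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
)
# Loopback and private ranges are also blocked; public addresses pass.
assert not is_safe_fetch_target("http://127.0.0.1/admin")
assert not is_safe_fetch_target("http://10.0.0.5/")
assert is_safe_fetch_target("http://93.184.216.34/")
```

AWS later shipped IMDSv2, which requires a session token obtained via a PUT request before the metadata service will answer, precisely to blunt this class of SSRF.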

2) Trend Micro engineer 2019

In early August 2019, Trend Micro became aware that some of our consumer customers running our home security solution had been receiving scam calls by criminals impersonating Trend Micro support personnel. The information that the criminals reportedly possessed in these scam calls led us to suspect a coordinated attack. Although we immediately launched a thorough investigation, it was not until the end of October 2019 that we were able to definitively conclude that it was an insider threat. A Trend Micro employee used fraudulent means to gain access to a customer support database that contained names, email addresses, Trend Micro support ticket numbers, and in some instances telephone numbers.

3) Google engineer in 2010

Google Site Reliability Engineer (SRE) David Barksdale was recently fired for stalking and spying on teenagers through various Google services…

And in case you were wondering how easily a Google SRE could get away with this kind of obvious stalking behavior over the last ten years, consider that in 2018 the company said it was starting to fire people in positions of authority for misconduct, with nearly 50 out the door:

…an increasingly hard line on inappropriate conduct by people in positions of authority: in the last two years, 48 people have been terminated for sexual harassment, including 13 who were senior managers and above.

Do you realize how huge the investigations team has to be to collect evidence on, let alone fire, 48 people over two years for sexual harassment?

4) Uber employees in 2016

Uber’s lack of security regarding its customer data was resulting in Uber employees being able to track high profile politicians, celebrities, and even personal acquaintances of Uber employees, including ex-boyfriends/girlfriends, and ex-spouses…

5) Twitter engineers 2019

Ahmad Abouammo and Ali Alzabarah each worked for the company from 2013 to 2015. The complaint alleges that Alzabarah, a site reliability engineer, improperly accessed the data of more than 6,000 Twitter users. […] Even after leaving the company, Abouammo allegedly contacted friends at Twitter to facilitate Saudi government requests, such as for account verification and to shutter accounts that had violated the terms of service.

6) Yahoo engineer 2018

In pleading guilty, Ruiz, a former Yahoo software engineer, admitted to using his access through his work at the company to hack into about 6,000 Yahoo accounts.

Just for the record, Ruiz was hired into Okta (a cloud identity provider) as an SRE for eight months before he pleaded guilty to massive identity-theft crimes at his former employer.

7) Facebook engineer 2018

Facebook fires engineer who allegedly used access to stalk women. The employee allegedly boasted he was a ‘professional stalker.’

8) Lyft employees 2018

…employees were able to use Lyft’s back-end software to “see pretty much everything including feedback, and yes, pick up and drop off coordinates.” Another anonymous employee posted on the workplace app Blind that access to clients’ private information was abused. While staffers warned one another that the data insights tool tracks all usage, there seemed to be little to no enforcement, giving employees free reign over it. They used that access to spy on exes, spouses and fellow Lyft passengers they found attractive. They even looked up celebrity phone numbers, with one employee boasting that he had Zuckerberg’s digits. One source admitted to looking up their significant other’s Lyft destinations: “It was addictive. People were definitely doing what I was.”

9) AirBnB employees 2015

Airbnb actually teaches classes in SQL to employees so everyone can learn to query the data warehouses it maintains, and it has also created a tool called Airpal to make it easier to design SQL queries and dispatch them to the Presto layer of the data warehouse. (This tool has also been open sourced.) Airpal was launched internally at Airbnb in the spring of 2014, and within the first year, over a third of all employees at the company had launched an SQL query against the data warehouse.

10) And last but definitely not least, in a story rarely told accurately, City of San Francisco engineer 2008

Childs made national headlines by refusing to hand over administrative control to the City of San Francisco’s FiberWAN network, [a cloud service] which he had spent years helping to create.

One of the most peculiar aspects of the San Francisco case was how the network was sending logs (and perhaps even more, given a tap on core infrastructure) to a cluster of encrypted Linux servers, and only Terry Childs had the keys. He installed those servers in a metal cabinet with wood reinforcements and padlocks outside the datacenter, behind his desk. Holes were drilled in the cabinet for cable runs back into the datacenter, where holes also were drilled to allow entry.

The details of his case are so bizarre I’m surprised nobody made a movie about it. Maybe if one had been made, we’d have drawn more attention to these insider risks to cloud than the book we published in 2012 did.

So who would be the equivalent today of Tony Curtis?


Updated December 2019 with yet another Facebook example

11) Journalists had to notify the massively wealthy company that insiders were taking bribes to easily circumvent security controls and intentionally harm users:

A Facebook employee was paid thousands of dollars in bribes by a shady affiliate marketer to reactivate ad accounts that had been banned due to policy violations, a BuzzFeed News investigation has found. A company spokesperson confirmed that an unnamed employee was fired after inquiries from BuzzFeed News sparked an internal investigation.

USAF Plans Around PsyOps Leaflet Dispersal Shortfall

Somehow I had missed some leaflet planning considerations over the past few years.

“…shortfall on leaflet dispersal capability will jeopardize Air Force Central Command information operations,” said Earl Johnson, B-52 PDU-5/B project manager. The “Buff” can carry 16 PDU-5s under the wings, making it able to distribute 900,000 leaflets in a single sortie.

That’s a lot of paper.

Discussing leaflet drops with a pilot the other night set me straight on the latest methods; he emphasized some recent capability for computerized micro-targeted leaflet fall paths. It sounded too good to be true. I still imagine the stuff floating everywhere randomly like snowflakes.

Then again, he might have been pulling my leg, given that we were talking about psyops; or the person who told him might have been pulling his.

Speaking of psyops, OpenAI is said to have leaked its monster into the world. We’ve already been warned it could unleash targeted text at a scale even a B-52 couldn’t touch:

…extremist [use of OpenAI tools] to create ‘synthetic propaganda’ that would allow them to automatically generate long text promoting white supremacy or jihadist Islamis…

And perhaps you can see where things right now are headed with this information dissemination race?

U.S. seen as ‘exporter of white supremacist ideology,’ says counterterrorism official

Can B-52s leaflet America fast enough to convince domestic terrorists to stop generating their AI-based long texts promoting white supremacist ideology?

Is Stanford Internet Observatory (SIO) a Front Organization for Facebook?

A “Potemkin Village” is made from fake storefronts built to fraudulently impress a visiting czar and dignitaries. The “front organization” is similarly torn down once its specific message/purpose ends.

Image Source: Weburbanist’s ‘Façades’ series by Zacharie Gaudrillot-Roy
Step one (PDF): Facebook sets up special pay-to-play access (competitive advantage) to user data and leaks this privileged (back) door to Russia.

(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies who made extravagant ads purchases to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later Tretti, after meeting with Facebook’s top security personnel, retracts his statement without explanation, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia)

Step two: Facebook CEO announces his company doesn’t care if information is inauthentic or even disinformation.

Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.

Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity and coordination of actions (not measure disinformation).

Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.

In other words you can tell a harmful lie. You just can’t start a union, even to tell a truth, because unions by definition would be inauthentic (representing others) and coordinated in their actions.

It’s ironic as well, since this new SIO clearly was created by Facebook to engage in inauthentic and coordinated actions. Do as they say, not as they do.

The Potemkin Village effect here is thus former staff of Facebook creating an academic front to look like they aren’t working for Facebook, while still working with and for Facebook… on a variation of the very thing that Facebook has said it would not be working on.

For example, hypothetically speaking:

If Facebook were a company in 1915 would they have said they don’t care about inauthentic information in “Birth of a Nation” that encouraged restarting the KKK?

Even to this day Americans are very confused about whether the White House of Woodrow Wilson was coordinating the restart of the KKK, and they debate that instead of the obvious failure to block a film whose intentionally harmful content incited deadly violence against black people (e.g. the huge rise in lynchings, the 1919 Red Summer, the 1921 Tulsa massacre, etc.).

Instead, based on this new SIO model, it seems Facebook of 1915 would partner with a University to announce they will target and block films of pro-KKK rallies on the basis of white sheets and burning crosses being inauthentic coordinated action.

It reads to me like a very strange use of APIs as privacy backdoors, as well as use of “academic” organizations as legal backdoors; both amount to false self-regulation, an attempt to side-step the obvious external pressure to regulate harms from speech.

Facebook perhaps would have said in 1915 that the KKK was fine calling for genocide and the death of non-whites, as long as members known to be pushing such toxic and inauthentic statements didn’t put on hoods to conceal their faces while they did it.

It’s easy to see the irony in how Facebook takes an inauthentic position, with its own staff strategically installed into an academic institution like Stanford, while telling everyone else they have to be authentic in their actions.

Also perhaps this is a good time to remember how a Stanford professor took large payments from tobacco companies to say cigarettes weren’t causing cancer.

[Board-certified otolaryngologist Bill Fees] said he was paid $100,000 to testify in a single case.


Updated November 12 to add latest conclusions of the SIO about Facebook data provided to them.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.

One thing you haven’t seen and probably will never see is the SIO saying Facebook is a threat, or that privately-held publishing/advertising companies are a danger to society (e.g. how tobacco companies or oil companies are a danger).