Don’t Be an AppleCard: Exposed for Using Sexist Algorithm

Wrecked ship Captain de Kam said “It’s just like losing a beautiful woman”.
Photograph: Michael Prior

The creator of Ruby on Rails tweeted angrily at Apple on November 7th, saying it was discriminating unfairly against his wife, and that he couldn’t get a response:

By the next day he had a response, and he was even more unhappy. “THE ALGORITHM”, described in terms reminiscent of Kafka’s novel “The Trial”, became the focus of his complaint:

She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.

And the following day he appealed to regulators for transparency regulation:

It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?

Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.

What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether technology companies are prepared for the moment when “THE ALGORITHM” exposes centuries of social debt, in the form of discrimination against women and minorities, and links the political science of inequality directly to them.

Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies are too powerful for any individual to keep the market functioning to any degree of fairness.

Take, for example, the German government’s just-released “Datenethikkommission” report on regulating AI.

And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a social system of fairness for anyone else.

I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.

Again these are not revolutionary concepts. We’re seeing the impact from a disconnect between history, social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.

In my recent presentation to auditors at the annual ISACA-SF conference, I concluded with a list and several examples of how AI auditing can be performed most effectively.

One of the problems we’re going to run into with auditing Apple products for transparency is that Apple has long waged a war against any transparency in technology, from denying our right to repair hardware to forcing “store”-bought software.

Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.

The closed-minded Apple model of business is also dangerous as it directly inspires others to repeat the mistakes.

Honeywell, for example, now speaks of “taking over your building’s brains” by emulating how Apple shuts down freedom:

A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.

That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words, do you want to live in a building run by a criminal brain?

When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.

Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.

…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.

A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:

The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.

Those sailing the ship need to course correct it. We shouldn’t expect people outside the wheelhouse to drive necessary changes. The exception is the governance group that licenses ship captains and thus holds them accountable for acting like an AppleCard.

AI Apocalypse: Misaligned Objectives or Poor Quality Control?

An author claims to have distilled AI risks down to the one of greatest importance, which they refer to as “misaligned objectives”:

The point is: nobody ever intends for robots that look like Arnold Schwarzenegger to murder everyone. It all starts off innocent enough – Google’s AI can now schedule your appointments over the phone – then, before you know it, we’ve accidentally created a superintelligent machine and humans are an endangered species.

Could this happen for real? There’s a handful of world-renowned AI and computer experts who think so. Oxford philosopher Nick Bostrom‘s Paperclip Maximizer uses the arbitrary example of an AI whose purpose is to optimize the process of manufacturing paperclips. Eventually the AI turns the entire planet into a paperclip factory in its quest to optimize its processes.
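For the uninitiated, the thought experiment can be reduced to a few lines. This toy sketch (all names invented for illustration, not from any real system) shows how an optimizer whose objective counts only paperclips will consume every other resource, simply because nothing else appears in its reward:

```python
def misaligned_step(state):
    """Greedy policy: convert any available resource into a paperclip.
    The objective sees only the paperclip count; every other value in
    the world is invisible to it."""
    if state["resources"] > 0:
        state["resources"] -= 1
        state["paperclips"] += 1
    return state

world = {"resources": 5, "paperclips": 0}
for _ in range(10):
    world = misaligned_step(world)

print(world)  # {'resources': 0, 'paperclips': 5}
```

Nothing in the loop is malicious; the harm is entirely in what the objective leaves out.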

OK, first, it is false to say nobody intends for robots to murder everyone. Genocide is a very real thing. Mass murder is a very real thing. Automation is definitely part of those evil plans.

Second, it seems to me this article misses the point entirely. Misaligned objectives may in fact be aligned, just expressed in unexpected or sloppy ways that people are reluctant to revise and clarify.

It reminds me of criticisms in economics of using poor measurements of productivity, a constant problem in Soviet Russia (e.g. window factories spat out panes that nobody could use). Someone is benefiting from massive paperclip production, but who retains authorization over output?

If we’re meant to be saying a centrally-planned, centrally-controlled system of paperclip production is disastrous for everyone but dear leader (in this case an algorithm), we might as well be talking about market theory texts from the 1980s.

Let’s move away from these theoretical depictions of AI as future Communism and instead consider a market-based application of automation that kills today.

Cement trucks have automation in them. They repeatedly run over cyclists and kill them because they operate with too wide a margin of error in a society with vague accountability, not due to misaligned objectives.

Take for example how San Francisco has just declared a state of emergency over pedestrians and cyclists dying from automated killing machines roaming city streets.

As of August 31, the 2019 death toll from traffic fatalities in San Francisco was 22 people — but that number doesn’t include those who were killed in September and October, including Pilsoo Seong, 69, who died in the Mission last week after being hit by a truck. On Tuesday, the Board of Supervisors responded to public outcry over the issue by passing a resolution to declare a state of emergency for traffic safety in San Francisco.

Everyone has similar or same objectives of moving about on the street, it’s just that some are allowed to operate with such low levels of quality that they can indiscriminately murder others and say it’s within their expected operations.

I can give hundreds of similar examples. Jaywalking is an excellent one: machines already have interpreted that racist law (a human objective to criminalize non-white populations) as license to kill pedestrians without accountability.

Insider-threat as a Service (IaaS)

1961 film based on the “true story of Ferdinand Waldo Demara, a bright young man who hasn’t the patience for the normal way of advancement finds that people rarely question you if your papers are in order.”
During several presentations and many meetings this past week I’ve had to discuss an accumulation of insider threat stories related to cloud service providers.

It has gotten so bad that our industry should stop saying “evil maid” and say “evil SRE” instead. Anyway, has anyone made a list? I haven’t seen a good one yet so here’s a handy reference for those asking.

First, before I get into the list, don’t forget that engineers are expected to plan for people doing things incorrectly. It is not ok for any cloud provider to say they accept no responsibility for their own staff causing harm.

Do not accept the line that victims should be entirely responsible. Amazon has been forced to back down on this before.

After watching customer after customer screw up their AWS S3 security and expose highly sensitive files publicly to the internet, Amazon has responded. With a dashboard warning indicator…
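That dashboard indicator amounts to a simple check of who holds grants on a bucket. Here is a minimal sketch of that logic, assuming the ACL shape boto3 returns from `get_bucket_acl` (the sample data below is invented for illustration):

```python
# The two AWS group URIs that make a bucket readable by, effectively,
# the entire internet or any AWS account holder.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return the ACL grants that expose a bucket publicly."""
    return [
        g for g in acl.get("Grants", [])
        if g.get("Grantee", {}).get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GROUPS
    ]

# Invented sample mirroring the boto3 get_bucket_acl response shape:
sample_acl = {
    "Grants": [
        {"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
         "Permission": "FULL_CONTROL"},
        {"Grantee": {"Type": "Group",
                     "URI": "http://acs.amazonaws.com/groups/global/AllUsers"},
         "Permission": "READ"},
    ]
}

print(len(public_grants(sample_acl)))  # 1 public grant found
```

The point being: the check is trivial, which is why a warning indicator was such a belated, minimal response.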

And in the case of an ex-AWS engineer attacking a customer, the careless attitude of Amazon has become especially relevant.

“The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it,” said Cloudflare’s Evan Johnson. Now senators Ron Wyden and Elizabeth Warren have penned an open letter to the FTC, asking it to investigate if “Amazon’s failure to secure the servers it rented to Capital One may have violated federal law.” It noted that while Google and Microsoft have both taken steps to protect customers from SSRF attacks, “Amazon continues to sell defective cloud computing services to businesses, government agencies and to the general public.”

Second, a more complicated point regularly put forth by cloud companies including Amazon is how “at the time” means they can escape liability.

When an Uber driver brutally killed a little girl in a crosswalk, Uber tried to claim their driver “at the time” wasn’t their responsibility.

Attorneys for Uber said the ride-sharing company was not liable … because the driver was an independent contractor and had no reason to be actively engaged with the app at the time. […] “It’s their technology. They need to make it safe,” [family attorney] Dolan said in January, suggesting that a hands-free mode would bring the business into compliance with distracted-driving laws.

Uber paid the family of the dead girl an undisclosed sum a year later “to avoid a trial about its responsibility for drivers who serve its customers”.

Lyft ran into a similar situation recently when “rideshare rapists” around America operated “at the time” as private individuals who just happened to have Lyft signage in their car when they raped women. The women were intoxicated or otherwise unable to detect that drivers “at the time” were abusing intimate knowledge of gaps in a cloud service trust model (including weak background checks).

Imagine a bank telling you it has no responsibility for money lost from your account because “at the time” the person stealing from you was no longer its employee. It’s not quite so simple; otherwise companies would fire criminals mid-crime and instantly wash their hands without stopping any actual harm.

Let’s not blindly walk into this thinking insiders, such as SRE, are off the hook when someone throws up pithy “at the time” arguments.

And now for the list

1) AWS engineer in 2019

The FBI has already arrested a suspect in the case: A former engineer at Amazon Web Services (AWS), Paige Thompson, after she boasted about the data theft on GitHub.

Remember how I said “at the time” arguments should not easily get people off the hook? The former engineer at AWS, who worked there around 2015-2016, used exploits known since 2012 on servers in 2019. “At the time” gets fuzzy here.

Let the timeline sink in, and then take note that customers were making mistakes with Amazon for many years, and Amazon’s staff had front-row seats to the vulnerabilities. Other companies treated the same situation very differently. “Little Man in My Head” put it like this:

The three biggest cloud providers are Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP). All three have very dangerous instance metadata endpoints, yet Azure and GCP applications seem to never get hit by this dangerous SSRF vulnerability. On the other hand, AWS applications continue to get hit over and over and over again. Does that tell you something?

Yes, it tells us that an inside engineer working for Amazon any time after 2012 would have seen a lot of vulnerabilities sitting ripe for the picking, unlike engineers at other cloud providers. Boasting about theft is what got the former engineer caught, which should make us wonder about the engineers who never boasted.
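The SSRF pattern being described is simple: an application fetches an attacker-supplied URL, and the attacker points it at the instance metadata service to steal credentials. A minimal sketch of an application-side guard, assuming a hypothetical fetcher that would call this check before making any request (the metadata address 169.254.169.254 is real; everything else here is illustrative):

```python
import ipaddress
from urllib.parse import urlparse

def is_blocked_target(url):
    """Reject URLs aimed at link-local, private, or loopback addresses,
    such as the cloud instance metadata service at 169.254.169.254."""
    host = urlparse(url).hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Not a bare IP; a real fetcher would resolve DNS and re-check
        # the resolved address to stop DNS-rebinding tricks.
        return False
    return addr.is_link_local or addr.is_private or addr.is_loopback

# An attacker-supplied "image URL" aimed at instance credentials:
print(is_blocked_target("http://169.254.169.254/latest/meta-data/"))  # True
print(is_blocked_target("http://example.com/avatar.png"))             # False
```

Google and Microsoft additionally hardened the metadata endpoint itself (required headers, session tokens), which is the kind of server-side fix the senators’ letter says Amazon failed to ship.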

2) Trend Micro engineer 2019

In early August 2019, Trend Micro became aware that some of our consumer customers running our home security solution had been receiving scam calls by criminals impersonating Trend Micro support personnel. The information that the criminals reportedly possessed in these scam calls led us to suspect a coordinated attack. Although we immediately launched a thorough investigation, it was not until the end of October 2019 that we were able to definitively conclude that it was an insider threat. A Trend Micro employee used fraudulent means to gain access to a customer support database that contained names, email addresses, Trend Micro support ticket numbers, and in some instances telephone numbers.

3) Google engineer in 2010

Google Site Reliability Engineer (SRE) David Barksdale was recently fired for stalking and spying on teenagers through various Google services…

And in case you were wondering how easily a Google SRE could get away with this kind of obviously bad stalking behavior over the last ten years, consider that in 2018 the company said it was starting to fire people in management for crimes, and nearly 50 were out the door:

…an increasingly hard line on inappropriate conduct by people in positions of authority: in the last two years, 48 people have been terminated for sexual harassment, including 13 who were senior managers and above.

Do you realize how huge the investigations team has to be to collect evidence let alone fire 48 people over two years for sexual harassment?

4) Uber employees in 2016

Uber’s lack of security regarding its customer data was resulting in Uber employees being able to track high profile politicians, celebrities, and even personal acquaintances of Uber employees, including ex-boyfriends/girlfriends, and ex-spouses…

5) Twitter engineers 2019

Ahmad Abouammo and Ali Alzabarah each worked for the company from 2013 to 2015. The complaint alleges that Alzabarah, a site reliability engineer, improperly accessed the data of more than 6,000 Twitter users. […] Even after leaving the company, Abouammo allegedly contacted friends at Twitter to facilitate Saudi government requests, such as for account verification and to shutter accounts that had violated the terms of service.

6) Yahoo engineer 2018

In pleading guilty, Ruiz, a former Yahoo software engineer, admitted to using his access through his work at the company to hack into about 6,000 Yahoo accounts.

Just for the record, Ruiz had been hired by Okta (a cloud identity provider) as an SRE for eight months before he pleaded guilty to massive identity-theft crimes at his former employer.

7) Facebook engineer 2018

Facebook fires engineer who allegedly used access to stalk women. The employee allegedly boasted he was a ‘professional stalker.’

8) Lyft employees 2018

…employees were able to use Lyft’s back-end software to “see pretty much everything including feedback, and yes, pick up and drop off coordinates.” Another anonymous employee posted on the workplace app Blind that access to clients’ private information was abused. While staffers warned one another that the data insights tool tracks all usage, there seemed to be little to no enforcement, giving employees free reign over it. They used that access to spy on exes, spouses and fellow Lyft passengers they found attractive. They even looked up celebrity phone numbers, with one employee boasting that he had Zuckerberg’s digits. One source admitted to looking up their significant other’s Lyft destinations: “It was addictive. People were definitely doing what I was.”

9) AirBnB employees 2015

Airbnb actually teaches classes in SQL to employees so everyone can learn to query the data warehouses it maintains, and it has also created a tool called Airpal to make it easier to design SQL queries and dispatch them to the Presto layer of the data warehouse. (This tool has also been open sourced.) Airpal was launched internally at Airbnb in the spring of 2014, and within the first year, over a third of all employees at the company had launched an SQL query against the data warehouse.

10) And last but definitely not least, in a story rarely told accurately, City of San Francisco engineer 2008

Childs made national headlines by refusing to hand over administrative control to the City of San Francisco’s FiberWAN network, [a cloud service] which he had spent years helping to create.

One of the most peculiar aspects of the San Francisco case was how the network was sending logs (and perhaps even more, given a tap on core infrastructure) to a cluster of encrypted linux servers, and only Terry Childs had keys. He installed those servers in a metal cabinet with wood reinforcements and padlocks outside the datacenter and behind his desk. Holes were drilled in the cabinet for cable runs back into the datacenter, where holes also were drilled to allow entry.

The details of his case are so bizarre I’m surprised nobody made a movie about it. Maybe if one had been made, we’d have garnered more attention about these insider risks to cloud than the book we published in 2012.

So who would be the equivalent today of Tony Curtis?


Updated December 2019 with yet another Facebook example

11) Journalists had to notify the massively wealthy company that insiders were taking bribes to circumvent security controls and intentionally harm users:

A Facebook employee was paid thousands of dollars in bribes by a shady affiliate marketer to reactivate ad accounts that had been banned due to policy violations, a BuzzFeed News investigation has found. A company spokesperson confirmed that an unnamed employee was fired after inquiries from BuzzFeed News sparked an internal investigation.

USAF Plans Around PsyOps Leaflet Dispersal Shortfall

Somehow I had missed some leaflet planning considerations over the past few years.

“…shortfall on leaflet dispersal capability will jeopardize Air Force Central Command information operations,” said Earl Johnson, B-52 PDU-5/B project manager. The “Buff” can carry 16 PDU-5s under the wings, making it able to distribute 900,000 leaflets in a single sortie.

That’s a lot of paper.
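A quick sanity check of the quoted figures, 16 canisters and 900,000 leaflets per sortie:

```python
# Per the quote above: 16 PDU-5/B canisters, 900,000 leaflets per sortie.
leaflets_per_sortie = 900_000
canisters = 16
print(leaflets_per_sortie // canisters)  # 56250 leaflets per canister
```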

Discussing leaflet drops with a pilot the other night set me straight on the latest methods; he emphasized a recent capability for computerized, micro-targeted leaflet fall paths. It sounded too good to be true. I still imagine the stuff floating everywhere randomly, like snowflakes.

Then again, he might have been pulling my leg, given that we were talking about psyops; or perhaps the person who told him had been pulling his.

Speaking of psyops, OpenAI is said to have leaked their monster into the world. We’re warned already it could unleash targeted text at scale even a B-52 couldn’t touch:

…extremist [use of OpenAI tools] to create ‘synthetic propaganda’ that would allow them to automatically generate long text promoting white supremacy or jihadist Islamis…

And perhaps you can see where things are headed right now in this information dissemination race?

U.S. seen as ‘exporter of white supremacist ideology,’ says counterterrorism official

Can B-52s leaflet America fast enough to convince domestic terrorists to stop generating their AI-based long texts promoting white supremacist ideology?