
Don’t Be an AppleCard: Exposed for Using Sexist Algorithm

Wrecked ship Captain de Kam said, “It’s just like losing a beautiful woman.”
Photograph: Michael Prior

The creator of Ruby on Rails tweeted angrily at Apple on November 7th that it was unfairly discriminating against his wife, and he wasn’t able to get a response:

By the next day he had a response, and he was even more unhappy. “THE ALGORITHM”, described in terms reminiscent of Kafka’s 1915 novel “The Trial”, became the focus of his complaint:

She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.

And the following day he appealed to regulators for transparency regulation:

It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?

Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.
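To make the transparency request concrete: the “accessible dossier” the tweet describes is essentially what a reason-code scorecard produces. Below is a minimal sketch, assuming a simple linear model with invented feature names and weights (no resemblance to whatever Goldman Sachs actually runs), of a credit decision that reports every input, its contribution, and plain-language reasons:

```python
# Minimal sketch of the "accessible dossier" idea: a linear scorecard
# that reports every input, each input's contribution to the score,
# and plain-language reason codes. All feature names, weights, and
# values are invented for illustration only.
from pprint import pprint

WEIGHTS = {
    "income": 0.4,
    "credit_history_years": 0.3,
    "debt_to_income": -0.5,
    "recent_defaults": -0.8,
}

REASON_CODES = {
    "income": "Reported income",
    "credit_history_years": "Length of credit history",
    "debt_to_income": "Debt-to-income ratio",
    "recent_defaults": "Defaults in the last 24 months",
}

def score_with_dossier(applicant: dict) -> dict:
    """Return the score plus a per-input breakdown the applicant can review."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    # Rank inputs from most harmful to most helpful, so the dossier
    # "explains plainly why differences apply".
    ranked = sorted(contributions, key=contributions.get)
    return {
        "score": round(sum(contributions.values()), 2),
        "inputs": applicant,             # a fair chance to correct faulty inputs
        "contributions": contributions,  # what each input did to the score
        "top_reasons": [REASON_CODES[f] for f in ranked[:2]],
    }

pprint(score_with_dossier({
    "income": 85,                 # thousands of dollars
    "credit_history_years": 12,
    "debt_to_income": 35,         # percent
    "recent_defaults": 0,
}))
```

U.S. lenders are already required by the Equal Credit Opportunity Act to send adverse-action notices listing the principal reasons for a denial, so the tweet is asking for an extension of existing practice, not a new invention.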

What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether centuries of social debt, in the form of discrimination against women and minorities, is what technology companies are prepared for when “THE ALGORITHM” exposes the political science of inequality and links it to them.

Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies have become too powerful for any individual to keep the market functioning with any degree of fairness.

Take, for example, the German government’s just-released “Datenethikkommission” (Data Ethics Commission) report on regulating AI.

And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a fair social system for anyone else.

I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.

Again, these are not revolutionary concepts. We’re seeing the impact of a disconnect between history, the social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.

In my recent presentation to auditors at the annual ISACA-SF conference, I concluded with a list and several examples of how AI auditing can perform most effectively.

One of the problems we’re going to run into when auditing Apple products for transparency is that Apple has long waged a war against transparency in technology, from denying our right to repair hardware to forcing software purchases through its “store”.

Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.

The closed-minded Apple model of business is also dangerous because it directly inspires others to repeat the same mistakes.

Honeywell, for example, now speaks of “taking over your building’s brains” by emulating how Apple shuts down freedom:

A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.

That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words, do you want to live in a building run by a criminal brain?

When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.

Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.

…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.

A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:

The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.

Those sailing the ship need to course-correct it. We shouldn’t expect people outside the bridge to drive the necessary changes. The exception is the governance group that licenses ship captains and thus holds them accountable for acting like an AppleCard.

Is Stanford Internet Observatory (SIO) a Front Organization for Facebook?

A “Potemkin Village” is made of fake storefronts built to fraudulently impress a visiting czar and dignitaries. A “front organization” works the same way: it is torn down once its specific message or purpose has been served.

Image Source: Weburbanist’s ‘Façades’ series by Zacharie Gaudrillot-Roy
Step one (PDF): Facebook sets up special pay-to-play access (competitive advantage) to user data and leaks this privileged (back) door to Russia.

(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies that made extravagant ad purchases, to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later, after meeting with Facebook’s top security personnel, Tretti retracts his statement, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia.)
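The detection logic the email implies is straightforward, which is what makes the one-hour retraction so hard to believe. Here is a minimal sketch, in Python, of the kind of token-usage audit being described; the token name, regions, and thresholds are my own hypothetical illustration, not Facebook’s actual tooling:

```python
# Minimal sketch of a token-usage anomaly check: flag an API access
# token when its call volume or calling region no longer matches the
# partner it was issued to. All names and thresholds are invented.

API_TOKENS = {
    "pinterest-ordered-friends": {
        "expected_regions": {"US"},
        "max_calls_per_day": 50_000_000,
    },
}

def audit_token_usage(token, calls_per_day, calls_by_region):
    """Return human-readable alerts for a token's daily usage."""
    profile = API_TOKENS[token]
    alerts = []
    if calls_per_day > profile["max_calls_per_day"]:
        alerts.append(
            f"{token}: {calls_per_day:,} calls/day exceeds expected ceiling"
        )
    for region in set(calls_by_region) - profile["expected_regions"]:
        alerts.append(
            f"{token}: {calls_by_region[region]:,} calls from "
            f"unexpected region {region}"
        )
    return alerts

# Roughly the scenario in the email: billions of daily calls, mostly
# from IPs in a region where the partner has no servers or offices.
print(audit_token_usage(
    "pinterest-ordered-friends",
    calls_per_day=3_000_000_000,
    calls_by_region={"RU": 2_600_000_000, "US": 400_000_000},
))
```

Run against the numbers from the email, a check like this fires immediately; quieting it an hour later would require explaining away both the volume and the origin.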

Step two: Facebook CEO announces his company doesn’t care if information is inauthentic or even disinformation.

Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.

Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity and coordination of actions (not to measure disinformation).

Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.

In other words, you can tell a harmful lie. You just can’t start a union, even to tell the truth, because a union by definition would be inauthentic (representing others) and coordinated in its actions.

It’s ironic as well, since this new SIO clearly was created by Facebook to engage in inauthentic and coordinated actions. Do as they say, not as they do.

The Potemkin Village effect here is thus former Facebook staff creating an academic front to look like they aren’t working for Facebook, while still working with and for Facebook… on a variation of the very thing Facebook has said it would not be working on.

For example, hypothetically speaking:

If Facebook were a company in 1915, would it have said it doesn’t care about inauthentic information in “The Birth of a Nation”, which encouraged restarting the KKK?

Even to this day Americans are confused about whether Woodrow Wilson’s White House coordinated the restart of the KKK, and they debate that instead of the obvious failure to block a film whose intentionally harmful content got black people killed (e.g. the huge rise in lynchings, the 1919 Red Summer, the 1921 Tulsa massacre).

Instead, based on this new SIO model, it seems the Facebook of 1915 would partner with a university to announce it will target and block films of pro-KKK rallies on the grounds that white sheets and burning crosses are inauthentic coordinated action.

It reads to me like a very strange use of APIs as privacy backdoors, as well as a use of “academic” organizations as legal backdoors; both amount to false self-regulation, an attempt to side-step the obvious external pressure to regulate harms from speech.

Facebook perhaps would have said in 1915 that the KKK is fine calling for genocide and the death of non-whites, as long as the members known to be pushing such toxic and inauthentic statements don’t put on a hood to conceal their faces while they do it.

It’s easy to see the irony in how Facebook takes an inauthentic position, with its own staff strategically installed in an academic institution like Stanford, while telling everyone else they have to be authentic in their actions.

Also perhaps this is a good time to remember how a Stanford professor took large payments from tobacco companies to say cigarettes weren’t causing cancer.

[Board-certified otolaryngologist Bill Fees] said he was paid $100,000 to testify in a single case.


Updated November 12 to add the latest conclusions of the SIO about the Facebook data provided to them.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.

One thing you haven’t seen and probably will never see is the SIO saying Facebook is a threat, or that privately-held publishing/advertising companies are a danger to society (e.g. how tobacco companies or oil companies are a danger).

Searching in the Wild for What is Real

This new New York Review of Books essay reads to me like prose poetry and raises some important points about the desire to escape, and about believing that reality exists in places where we are not:

…when I look back at the series of wilderness travel articles I wrote for The New York Times a decade ago, what jumps out at me is the almost monomaniacal obsession with enacting Denevan’s myth by finding unpopulated places. Camped out in the Australian outback, I boasted that it was “the farthest I’d ever been from other human beings.” Along the “pristine void” of a remote river in the Yukon, I climbed ridges and scanned the horizon: “It was intoxicating,” I wrote, “to pick a point in the distance and wonder: Has any human ever stood there?”

Rereading those and other articles, I now began to reluctantly consider the possibility that my infatuation with the wilderness was, at its core, a poorly cloaked exercise in colonial nostalgia—the urbane Northern equivalent of dressing up as Stonewall Jackson at Civil War reenactments because of an ostensible interest in antique rifles.

As a historian I’d say he’s engaging in a poorly cloaked exercise in escapism, more like going to Disneyland than trying to reenact real events from the past (whether it be the white supremacist policies of Britain or America).

Just some food for thought after reading about the ridiculously high percentage of fraud in today’s “wilderness” of software service providers.

Fake Identity Farms Generating Fraud on All Sides for Profits

Earlier this year researchers disclosed in a study that the lack of regulation has allowed Bitcoin markets to be over 90% fraudulent.

Nearly 95% of all reported trading in bitcoin is artificially created by unregulated exchanges, a new study concludes, raising fresh doubts about the nascent market following a steep decline in prices over the past year.

Earlier analysis had pointed to robots programmed to manipulate markets at large scale and high speed (a sketch of one detection heuristic follows the excerpt below):

Bitcoin prices were being manipulated in late 2013 by a pair of autonomous computer programs running on bitcoin exchange MtGox, according to an anonymously published report.

The programs, named Willy and Markus, allegedly pushed prices up to $1,000 before the bubble burst after MtGox’s collapse in late February.

The report’s author alleges that some of the trades were coming from inside the exchange itself. “In fact,” the report says, “there is a ton of evidence to suggest that all of these accounts were controlled by MtGox themselves.”
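For a sense of how such manipulation gets flagged, here is a minimal sketch of one common heuristic: an account that generates heavy trading volume while its buys and sells nearly cancel out is inflating activity rather than taking a position. The toy data and thresholds are invented for illustration; this is not the methodology of the MtGox report or the fake-volume study quoted above:

```python
# Minimal sketch of a wash-trading heuristic: flag accounts whose
# buy and sell totals nearly cancel out despite heavy volume.
# Account names, amounts, and thresholds are invented for illustration.
from collections import defaultdict

trades = [
    # (account, side, amount_btc)
    ("willy",  "buy", 100), ("willy",  "sell", 99),
    ("willy",  "buy", 250), ("willy",  "sell", 251),
    ("markus", "buy", 500), ("markus", "sell", 498),
    ("alice",  "buy",  10),  # normal user: one-sided, low volume
]

def flag_wash_traders(trades, min_volume=100.0, max_imbalance=0.05):
    """Flag accounts with big volume but a near-zero net position."""
    bought = defaultdict(float)
    sold = defaultdict(float)
    for account, side, amount in trades:
        (bought if side == "buy" else sold)[account] += amount

    flagged = []
    for account in set(bought) | set(sold):
        volume = bought[account] + sold[account]
        net = abs(bought[account] - sold[account])
        if volume >= min_volume and net / volume <= max_imbalance:
            flagged.append((account, volume, net))
    return flagged

print(flag_wash_traders(trades))
# flags "willy" and "markus"; "alice" trades one-sided and passes
```

The named accounts and numbers are hypothetical; the point is only that wash trading leaves a statistical signature that an unregulated exchange has no obligation to look for.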

And here’s some brand-new reporting on a different value system, social-media fraud, from someone who worked inside an operation:

The farm has both left- and right-wing troll accounts. That makes their smear and support campaigns more believable: instead of just taking one position for a client, it sends trolls to work both sides, blowing hot air into a discussion, generating conflict and traffic and thereby creating the impression that people actually care about things when they really don’t – including, for example, about the candidacy of a recently elected member of the Polish parliament.

I suppose we can now say the Ashley Madison dataset was no exception to widespread online fraud:

Over 20 million male customers had checked their Ashley Madison email boxes at least once. The number of females who checked their inboxes stands at 1,492. There have already been multiple class action lawsuits filed against Ashley Madison and its parent company, Avid Life Media, but these findings could send the figures skyrocketing. If true, it means that just 0.0073% of Ashley Madison’s users were actually women — and that changes the fundamental nature of the site.

People keep asking what a future life with robots will look like, when we’re obviously already living in it. It basically looks like a world where the common late-1800s American phrase “there is a sucker born every day” continues to haunt the security industry…

The Great Conspiracy: A Complete History of the Famous Tally-sheet Cases, by Simeon Coy, 1889, p. 222

“Sneaking” banks refers to a social engineering trick where one person creates a distraction while the other sneaks money out of the vault.

Note how even back in 1889 an author wrote about banks and jewelers hacking themselves to become wise to how to stop hackers. Threats mostly targeted people too weak to protect themselves individually (hinting towards a need for regulatory oversight).