England’s Use of Cyphers in the 16th Century

I’ve written here before about French use of encryption in the 16th Century, and prior art. A new history article makes brief mention of ancient secrecy methods found in England.

The spies had a few special tricks up their sleeves. “They practiced secret inks,” explains Alford. “Quite a lot of use of code and cypher, which to our eyes looks relatively unsophisticated, although it develops an increasing sophistication.”

Cyphers became particularly important during the infamous Babington Plot, when Walsingham’s agents decrypted letters to and from Mary Queen of Scots. This provided evidence that Mary was conspiring against Elizabeth, leading to Mary’s trial and execution.

The UK National Archives have an example of the letters used, and Tudor Times explains the level of sophistication at the time:

By the 1580s, ciphers were extremely complex – they could incorporate substitute letters, Arabic numerals, nulls, letters with a dot before or after, substitute names for locations, and numbers, signs of the zodiac or days of the week for individuals.
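To make those features concrete, here is a minimal sketch (with an invented key, not a historical reconstruction) of the core tricks in that description: homophonic substitution, where one letter maps to any of several symbols, plus meaningless “nulls” sprinkled in to frustrate frequency analysis.

```python
# Minimal sketch of a Tudor-style homophonic substitution cipher.
# The key below is invented for illustration; real nomenclators also
# swapped in symbols for names, places and days of the week.
import random

# High-frequency letters get more substitutes, the clerk's usual trick.
KEY = {
    "e": ["12", "47", "83"],
    "t": ["05", "66"],
    "a": ["21", "74"],
    "r": ["39"],
    "m": ["58"],
    "y": ["90"],
}
NULLS = ["00", "99"]  # decoy symbols carrying no meaning

def encipher(plaintext: str) -> str:
    out = []
    for ch in plaintext.lower():
        if ch in KEY:
            out.append(random.choice(KEY[ch]))
        if random.random() < 0.2:  # occasionally inject a null
            out.append(random.choice(NULLS))
    return " ".join(out)

print(encipher("mary"))  # e.g. "58 21 99 39 90"
```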

If you think that sounds innovative, consider how French and English secrecy methods seem to have roots elsewhere:

Muhammad ibn Abbad al-Mu’tamid (المعتمد بن عباد), King of Seville from 1069-1092, used birds in poetry for secret correspondence.

US Army Considers Grey Hats for PSYOP Warriors

Leaflets are so basic, so black beret, it sounds like something higher up on the beret color chart may be coming to attract talent into Psychological Operations (PSYOP) as the field modernizes.

Nothing is decided yet, there’s still a chance someone could influence the decision, but rumors have it the psychological warfare troops will be represented by a beret in the color of white noise:

The idea is essentially still being floated at this point, but it could be a recruiting boon for the PSYOP career field, which is tasked with influencing the emotions and behaviors of people through products like leaflets, loudspeakers and, increasingly, social media.

“In a move to more closely link Army Special Operations Forces, the PSYOP Proponent at the U.S. Army John F. Kennedy Special Warfare Center and School is exploring the idea of a distinctive uniform item, like a grey beret, to those Soldiers who graduate the Psychological Operations Qualification Course,” Lt. Col. Loren Bymer, a USASOC spokesman, said in an emailed statement to Army Times.

Still seems a little fuzzy on the details, yet reporters also dropped some useful knowledge bombs in their story:

1) The new Army Special Operations Command strategy released just a month ago states that everyone will always be trained in cyber warfare and weaponizing information:

LOE 2 Readiness, OBJ 2.2 Preparation: Reality in readiness will be achieved using cyber and information warfare in all aspects of training.

2) Weaponizing information means returning to the influence operations of World War II, let alone World War I… I mean, adapting to the modern cloud platform (Cambridge Analytica) war:

“We need to move beyond our 20th century approach to messaging and start looking at influence as an integral aspect of modern irregular warfare,” Andrew Knaggs, the Pentagon’s deputy assistant secretary of defense for special operations and combating terrorism, said at a defense industry symposium in February. Army Special Operations Command appears to take seriously the role that influencing plays in great power competition.

Speaking of cloudy information and influence, an Army site describes how the Air Force in 2008 set up a data analysis function and referred to them as Grey Berets, or Special Operations Weather Team (SOWT):

As some of the most highly trained military personnel, the “grey beret” are a force to be reckoned with. Until SOWT gives the “all-clear” the mission doesn’t move forward.

The Air Force even offers hi-res photos of a grey beret as proof they are real.

Keesler AFB: “Team members collect atmospheric data, assist mission planning, generate accurate and mission-tailored target and route forecasts in support of global special operations, conduct special weather reconnaissance and train foreign national forces.”

Meanwhile over at the Navy and Marines there’s much discussion about vulnerability to broad-based information attacks across their entire supply chain.

This might be a good time to remember October 12, 1961 (only nine months after taking office as President), the day JFK visited Fort Bragg’s Special Warfare Center.

While Brigadier General (BG) William P. Yarborough, commander of the U.S. Army Special Warfare Center, waited at the pond, the presidential caravan drove down roads flanked on both sides by saluting SF soldiers, standing proudly in fatigues and wearing green berets.

“Late Thursday morning, 12 October 1961, BG Yarborough welcomed the 35th President, Secretary McNamara, GEN Decker, and the distinguished guests at the reviewing stand.”

General Yarborough very strategically wore the green beret as he greeted JFK, and they spoke of how long Special Forces had wanted it (arguably since 1953, when ex-OSS Major Brucker started the idea).

A few days after the visit JFK famously wrote poetically to the General:

The challenge of this old but new form of operations is a real one…I am sure the Green Beret will be a mark of distinction in the trying times ahead.

Just one month later the green beret became official headgear of the Special Forces.

Ironically NYT Reveals Own Bias in Story About Risks of AI Bias

This is a serious problem.

Metz and Munro gaze together into the abyss of bias they practice.

A month ago Munro realized bias in AI is bad, as you can see in his tweet above. And suddenly Munro is the leading voice in a NYT story on it?

Cade Metz appears to be a white man at the NYT who reached out to another white man, Dr. Munro. They then discuss bias in AI, Munro’s new interest and the subject of his future book.

Was there any point that either of them thought maybe someone who isn’t like them, someone who isn’t a white man and also who has been doing this a long time, could be the lead voice in their story about bias in AI?

Let’s dig in.

The transition in the story is so remarkably tone-deaf, it’s hard to believe it is real.

BERT and its peers are more likely to associate men with computer programming, for example…. On a recent afternoon in San Francisco, while researching a book on artificial intelligence, the computer scientist Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier. “This is the same historical inequity we have always seen,” said Dr. Munro, who has a Ph.D. in computational linguistics and previously oversaw natural language and translation technology at Amazon Web Services.
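For readers curious what such a probe looks like in practice, here is a minimal sketch, assuming the HuggingFace transformers library and the public bert-base-uncased checkpoint; the sentence template and word handling are my own illustration, not Munro’s actual method.

```python
# Sketch of a BERT gender-association probe, in the spirit of the test
# described above. Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

WORDS = ["jewelry", "baby", "horses", "house", "money", "action"]

for word in WORDS:
    # Ask BERT to fill the pronoun slot, restricted to "he" vs "she".
    results = fill(f"[MASK] bought the {word}.", targets=["he", "she"])
    scores = {r["token_str"]: r["score"] for r in results}
    leaning = "male" if scores.get("he", 0) > scores.get("she", 0) else "female"
    print(f"{word:10s} he={scores.get('he', 0):.4f} "
          f"she={scores.get('she', 0):.4f} -> {leaning}")
```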

  1. Why does the author think we should be happy to go from “more likely to associate men with computer programming” straight to here’s a man to talk about it? It’s like the NYT writing “mispelungs are a problem in communication”. So, how about don’t do that thing you’re saying is bad? Or at the very least set up an example, like Munro could have deferred to a black woman and said “I’m new to this and confirming what’s been said, so let’s ask her”.
  2. There are many books already written about this by people of diverse backgrounds. Why talk to someone still in research phase, and why this white man? Massachusetts Institute of Technology researcher Joy Buolamwini is such an obvious resource here. Or Yeshimabeit Milner, founder and executive director of Data for Black Lives, or MacArthur “Genius” award recipient Jennifer Eberhardt who published “Biased“, or Margaret Hu writing about a Big Data Constitution, or Caroline Criado Perez who published “Invisible Women“…come on people.
  3. 100 English words is barely a talking point, so why is it here? Even I’ve done far more in my Big Data ethics classes over the past five years. We literally fed hundreds of words from dozens of languages into algorithms to break them. I’ll bet my students from diverse backgrounds would be the better sources to quote than this one white man feeding “horses, money, baby, action” into any algorithm new or old. Were the rest of the words on his list like “bro, polo, golf, football, beer, eggplant, testicles, patagonia…”? Perhaps we also should be asking why he thought to test whether horses, baby and jewelry would associate more with women than men? Does mom, which is so obviously not male, serve as an outlier more in terms of his own life choices?
  4. “This is the same historical inequity we have always seen…” is a meaningless history phrase. Why can’t jewelry be associated with men? Historical inequity seen where? By whom? Over what period of time?
  5. Then I noticed…”previously oversaw natural language and translation technology at Amazon Web Services.” A quick check of LinkedIn revealed “Principal Product Manager at AWS Machine Learning, Sep 2016 – Jun 2017…I led product for Amazon Comprehend and Amazon Translate…the most senior Product Manager within AWS’s Machine Learning team”. Calling oneself the most senior product manager on a team usually means someone above was the overseer, not him. And even if we give him the benefit of the doubt, he was last there in 2017 and only stayed 10 months. It’s a stretch to hold that out as his priors. Why not speak to his recent work, his lack of focus on this topic, and the reason his bias story from just a month ago makes him so relevant?

None of this is to fault Munro entirely for answering the call of a journalist. Hey, I answer calls about ethics all the time too and I’m a white man.

His response, however, could have been to orient the journalist towards leading the story with people who have already released their books (as in how I discuss “Weapons of Math Destruction”), and to help the NYT represent topics of bias fairly, given that BERT is “more likely to associate men with computer programming”. It seems like a missed opportunity to avoid repeating known failures.

And if that isn’t enough, the article gets worse:

Researchers have long warned of bias in A.I. that learns from large amounts of data, including the facial recognition systems that are used by police departments and other government agencies as well as popular internet services from tech giants like Google and Facebook. In 2015, for example, the Google Photos app was caught labeling African-Americans as “gorillas.” The services Dr. Munro scrutinized also showed bias against women and people of color.

Oh really, Google was caught? Well do tell, who caught it then? Was it some non-white person who will remain without credit?

Yes. I’ve spoken about this at conferences many times, citing those people and the original work (i.e. instead of asking a white man from Stanford to give me their opinion).

Don’t you want to know who discovered the bias in the Google platform, and when?

What are the names of researchers who have long warned of bias? Were they women and people of color?

Yes. (See names above)

Yet the article returns to Munro (a Stanford-educated white man with 10 months at AWS, researching a new book) for his opinions, again, about women and people of color.

Wat.

We can do so much better.

Earlier and also in the NYT, a writer named Ruth Whippman gave some advice on what could be happening instead.

Use your platforms and your cultural capital to ask that men be the ones to do the self-improvement for once. Stand up for deference. Write the book that teaches men to sit back and listen and yield to others’ judgment. Code the app that shows them where to put the apologies in their emails. Teach them how to assess their own abilities realistically and modestly. Tell them to “lean out,” reflect and consider the needs of others rather than assertively restating their own. Sell the female standard as the norm.

If only Cade Metz had read it before publishing his own piece, he might have started by asking Munro whether — realistically and modestly speaking — there would be better candidates to feature in a story about bias, such as black and brown women already published and long-time active thought leaders in the same space. Maybe he did ask yet still decided to run Munro as lead in the story, and that would be even worse.

Drone-2-Drone Remote ID System Announced

Some are calling it a license plate system for drones to identify themselves, which becomes essential to safety. Some may recall that license plates were added to cars because they tended to look all the same, cause havoc, disaster and death, and drive away unidentified.

Incidentally (pun not intended) this is why license plates really are not needed for things like bicycles and motorcycles, which tend neither to get away nor be hard to identify uniquely.

The new drone-based system is leveraging past wireless protocol work and trying to get adoption before a European Union Aviation Safety Agency (EASA) July 2020 deadline for remote ID.

DJI’s system was built to conform to the forthcoming ASTM International standard for broadcast drone remote ID, developed over a period of 18 months by a broad group of industry and government stakeholders. The solution uses the Wi-Fi Aware protocol for mobile phones, which allows the phones to receive and use the Wi-Fi signals directly from the drones without having to complete a two-way connection. Because it does not need to connect to a Wi-Fi base station, a cellular network or any other external system, it works in rural areas with no telecom service. In DJI’s preliminary testing, the Wi-Fi Aware signals can be received from more than one kilometer away.
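The key design point is the one-way, connectionless broadcast. As a toy illustration of that idea (UDP broadcast here is only a stand-in for the Wi-Fi Aware publish/subscribe service DJI actually uses, and the message fields are hypothetical):

```python
# Toy sketch of one-way drone-to-phone remote ID broadcast. The receiver
# never completes a two-way connection with the drone, which is why the
# scheme works with no base station or cellular network.
import json
import socket

PORT = 49002  # arbitrary port chosen for this sketch

def broadcast_remote_id(drone_id: str, lat: float, lon: float) -> None:
    """Drone side: fire-and-forget broadcast, no connection needed."""
    msg = json.dumps({"id": drone_id, "lat": lat, "lon": lon}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", PORT))

def listen_for_remote_id() -> dict:
    """Phone side: passively receive whatever nearby drones announce."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", PORT))
        data, _addr = s.recvfrom(1024)
        return json.loads(data)
```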

This is a small step towards dealing with the increasing illegal use of drones:

According to the National Interagency Fire Center, aerial firefighting efforts have been shut down at least nine times this year because of drone use, and at least 20 drone incursions have hindered firefighting capabilities nationwide from January through October. A report shared with The Times showed that of those 20 incursions, five were in California.

The next step is intercepting, demanding ID to check for authorization, and disabling upon wrong response just like it’s 1962 again.

Crypto Keys Exposed in TPM Chips

Time to patch (Intel released new firmware) and go on with life. Keys in secure hardware reportedly can be exposed in as little as a few minutes:

…timing leakage on Intel firmware-based TPM (fTPM) as well as in STMicroelectronics’ TPM chip. Both exhibit secret-dependent execution times during cryptographic signature generation. While the key should remain safely inside the TPM hardware, we show how this information allows an attacker to recover 256-bit private keys…
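The underlying technique is worth spelling out. Below is a simplified simulation (all timing numbers invented) of how secret-dependent signing time leaks nonce bit-lengths; filtering for unusually fast signatures yields nonces with leading zero bits, which is exactly the raw material a lattice attack needs to recover the private key.

```python
# Simplified simulation of a TPM-FAIL-style timing side channel.
# Pretend each bit of the secret nonce costs a fixed amount of time
# during signature generation; real firmware leaks are noisier.
import secrets

CURVE_BITS = 256
NS_PER_BIT = 40  # invented: ~40 ns of scalar multiplication per bit

def timed_sign() -> tuple[int, int]:
    nonce = secrets.randbits(CURVE_BITS)
    duration_ns = nonce.bit_length() * NS_PER_BIT  # secret-dependent time
    return nonce, duration_ns

# Attacker: collect many signatures, keep only the fastest ones.
samples = [timed_sign() for _ in range(100_000)]
fast = [n for n, t in samples if t <= (CURVE_BITS - 8) * NS_PER_BIT]

# Every "fast" nonce has at least 8 leading zero bits -- the bias a
# lattice solver exploits to recover the 256-bit private key.
assert all(n.bit_length() <= CURVE_BITS - 8 for n in fast)
print(f"{len(fast)} of {len(samples)} signatures leak short nonces")
```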

Yet More Shit AI: Startups Appeal for Stool Photos

In 2013 I was flying around speaking on big data security controls, and wastewater analysis was one of my go-to examples of privacy and integrity risks.

The charts I showed sometimes listed the most popular drugs detected in each city’s wastewater site (e.g. cocaine in Oregon), and I would joke that we could write a guide-book to the world based on what “logs” were found.

Fancy corporate slide for “log analysis” in wastewater treatment centers around the world

Scientists at that time claimed the ability to look at city-wide water treatment plants and backtrack outputs to city-block locality. In the near future, they said, it would be possible to backtrack to a specific house or building.

For example, you get a prescription for a drug and the insurance company buys your wastewater metadata because it shows you’re taking the generic drug version while putting brand label receipts in claim forms. Or someone looks at a five-year analysis of the drugs you’re on, based on sewer data science, to estimate your insurance rates.

This wasn’t entirely novel for me. As a kid I was fascinated by an archaeologist who specialized in digs of the Old West. Everything in a frontier town might be thrown down the hole (e.g. to destroy evidence of “edge” behavior), so she would write narratives about real life based on the bottles, pistols, clothes, etc. found in and around where an outhouse once stood.

I’m a little surprised, therefore, that instead of a water sensor for toilets the latest startups ask people to use their phones to take pictures of their stool and upload.

…Auggi, a gut-health startup that’s building an app for people to track gastrointestinal issues, and Seed Health, which works on applying microbes to human health and sells probiotics — are soliciting poop photos from anyone who wants to send them. The companies began collecting the photos online on Monday via a campaign cheekily called “Give a S–t”…

It’s a novel approach in that you aren’t pinned to the toilet in your home and can go outside and take pictures of poop on a sidewalk to upload.

This could be a game-changer given how many rideshare drivers are relieving themselves in cities like San Francisco.

Here’s the sort of chart we need right now, and not just because it looks like ride-share companies giving us the finger.

Uber’s army of 45,000 people suddenly driving from far-away places into a tiny 7 mile by 7 mile peninsula, with zero plans for their healthcare needs, infamously drove up rates of feces deposited all over public places.

…anecdotal complaints have gotten the attention of San Francisco City Attorney Dennis Herrera. Last week, his office released information for the first time about the number of Uber and Lyft drivers estimated to be working in the city: 45,000. To compare, 1,500 taxi medallions were given out [in 2016], according to the city’s Treasurer & Tax Collector. For perspective, Bruce Schaller, an urban transportation expert, said there are about 55,000 Uber, Lyft and other ride-sharing drivers in New York City, a metropolis of 8 million people, eight times the size of San Francisco.

I’ll just say it again, that a rise in human waste on the streets correlates pretty heavily with a rise of ride share drivers from far away needing a convenient place to relieve themselves (especially as many ended up sleeping in their cars).

In a conversation I had with a man in 2016 who had jumped out of his car to start peeing on the sidewalk in front of my house (despite surveillance cameras pointed right at him), he told me his plight:

  • Uber driver: I plan to quit as soon as I get my $700 bonus for 100 rides
  • Me: Because you just needed that quick money?
  • Uber driver: No, man there are no restrooms. I’m tired of taking a shit on sidewalks and peeing in newspaper boxes. It’s degrading

There definitely was a spike in 2016, which perhaps could have been correlated to gig economy workers seeing that $700 bonus and wandering into the city.

In some cases it appears that ride-share drivers would accumulate a giant bag during the day and then throw it onto the street.

Sightings of human feces on the sidewalks are now a regular occurrence; over the past 10 years, complaints about human waste have increased 400%. People now call the city 65 times a day to report poop, and there have been 14,597 calls in 2018 alone. Last year, software engineer Jenn Wong even created a poop map of San Francisco, showing the concentration of incidents across the city. New mayor London Breed said: “There is more feces on the sidewalks than I’ve ever seen growing up here.” In a revolting recent incident, a 20lb bag of fecal waste showed up on a street in the city’s Tenderloin district.

Do you know what also became a regular occurrence over the past 10 years? Ride share vehicles with drivers needing to poop and no time or place to go.

Many people mistakenly attribute the dirty truth about ride-share driver behavior to homelessness, despite curious facts like “there aren’t actually more homeless people than there have been in the past”.

People also ignore the fact that being homeless and living on the street doesn’t mean that people don’t care about their living environment. Homeless people are actually known to clean and sweep, whereas a driver is far more likely to poop at whatever spot they can get away with and then scoot.

I’m not sure why it is so hard for people to admit that a massive rise in ride-sharing drivers, with no public restrooms for them, becomes an obvious contributor to waste problems.

In one case I even saw an Uber SUV stop in the middle of a street; a passenger with a dog jumped out and peed directly uphill from a small restaurant with sidewalk seating…the Uber crew then jumped back in and sped away as those eating watched helplessly while rivers of hot dog urine flowed under their dining tables.

That kind of scenario is common sense bad, no? Just look at ride-sharing booms in the 1800s for cities like London, which led to special huts being built for driver care and control.

By 1898 newspapers around the world reported “40 shelters in London, accommodating 3500 cabmen, and there was a fund, provided mostly by subscription, for the maintenance of them.”

Typical London Cabman’s Shelter after 1873

An app uploading photos for analysis, or even doing checks within the app itself, would be both a privacy threat to all the ride share drivers hoping to get away with their dirty business on streets, and a source of knowledge proving that a city’s most vulnerable (homeless) populations aren’t always to blame.

It would also help analysis that often just assumes a public toilet is for people walking rather than drivers who could loiter anywhere in the city.

It’s a highly political topic, such that a “wasteland” interactive map with 2014 data turned into a crazy right-wing propaganda campaign to generate fear about San Francisco sanitation.

No mention ever is made in these political fights about unregulated ride-share drivers despite the obvious impact of at least 40,000 people driving into the city and around in circles all day every day generating pollution, noise, congestion and ultimately desperate for places to poop.

Waste analysis sensors could change all that, and the real cost of Uber, Lyft, etc. could lead to sanitation fees (maintenance funds) for a modern-day Rideshare Shelter, which of course would have sensors on toilets.

However, there’s already a security issue mentioned in the plan for these startups. Their data collection requires people to upload photos for manual classification, which sounds to me like an integrity disaster. A recipe for shitty data, if you will.

[Jack Gilbert, a professor of pediatrics at the University of California San Diego School of Medicine and cofounder of the American Gut Project, a science project that solicits fecal samples from people] said that people are asked to rate their stool on the Bristol stool chart in pretty much every clinical trial he conducts, and automating this process would reduce bias and variation in data collection. “Human beings are just not very good at recording things,” he said.
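Gilbert’s point about variance is easy to demonstrate with a toy simulation (all numbers invented): noisy human raters versus a deterministic classifier applied to the same photo.

```python
# Toy simulation: human Bristol-scale (1-7) ratings add noise, while an
# automated classifier, even an imperfect one, is at least consistent.
import random

TRUE_SCORE = 4  # the "actual" Bristol type of a sample

def human_rating() -> int:
    # Humans misremember and round; +/- up to 2 types of noise.
    return max(1, min(7, TRUE_SCORE + random.randint(-2, 2)))

def automated_rating() -> int:
    # A deterministic model gives the same answer every time.
    return TRUE_SCORE

humans = [human_rating() for _ in range(1000)]
mse = sum((r - TRUE_SCORE) ** 2 for r in humans) / len(humans)
print("human rating error (MSE):", mse)
print("automated rating error (MSE):", 0.0)
```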

Hopefully the startups will transition to the automated app and then traditional San Francisco residents who still walk on sidewalks, instead of calling a car to drive them three blocks, can use AI to efficiently report the prevalence of Uber poops.

Facebook App Caught Secretly Using Camera to Spy?

Joshua Maddux tweeted easily reproducible evidence that the Facebook app turns on your iPhone camera without notifying you and at times you weren’t expecting. TNW picked up the story:

By now, everyone should be well aware that any iOS app that has been granted access to your camera can secretly record you. Back in 2017, researcher Felix Krause spoke to TNW about the same issue.

At the time, the researcher noted one way to deal with this privacy concern is to revoke camera access (though that arguably doesn’t make for a smooth software experience). Another thing he suggested is covering up your camera — like former FBI director James Comey and Facebook‘s own emperor Mark Zuckerberg do.

Before saying that everyone who allows “emperor Zuckerberg” access to their camera should expect to be spied on, however, the author backs down and says it’s unclear whether Facebook secretly taking video is to be expected by iPhone users.

It remains unclear if this is expected behavior or simply a bug in the software for iOS (we all know what Facebook will say; spoiler: “Muh, duh, guh, it’s a bug. We sorry.”). For what it’s worth, we’ve been unable to reproduce the issue on Android (version 10, used on Google Pixel 4).

See my earlier post on neo-absolutist card indexes for a historic reference of what life was like for those who couldn’t quit Facebook of the 1800s.

One reason Facebook could repeatedly issue blanket denials (“we don’t use your sensors for ads”) could be that they shovel metadata into analytic engines and sell that to affiliates. Those other companies pay for the metadata, and someone else advertises to you, through this tortured logic.

Would that enable Facebook to claim they don’t consider themselves to be using the data for advertising? We’d have to do a deeper line of auditing to find out for sure. Looking at transfer of data is not enough anymore, as analytics increasingly can be done onboard mobile devices including drones collecting massive amounts of sensor data.

This also means Facebook could claim they have no evidence of photos, videos, etc being transmitted to them, while transmitting rich meta data about users based on sensor capture.
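To be clear about what that narrow claim permits, here is a hypothetical sketch of the on-device pattern: the photo itself never leaves the phone, yet derived metadata does. Function names and labels are invented for illustration.

```python
# Hypothetical on-device analytics pattern: raw image stays local,
# derived metadata is what gets transmitted.
import json

def classify_on_device(image_bytes: bytes) -> list[str]:
    # Stand-in for an on-device vision model (e.g. Core ML / TFLite).
    return ["indoors", "two_faces", "dog"]

def build_upload(image_bytes: bytes) -> bytes:
    labels = classify_on_device(image_bytes)
    # Only labels are transmitted -- technically "no photos or videos
    # uploaded", while still shipping rich targeting signal off-device.
    return json.dumps({"scene_labels": labels}).encode()
```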

See this example thread, which claims Spotify was the one who decided to target ads.

The most direct question is whether Facebook is able to use listening to sell data to companies like Spotify as profile/targeting meta information, without revealing to Spotify or anyone else that a microphone or camera actually was used.


Updated: An official explanation has been posted:

We recently discovered that version 244 of the Facebook iOS app would incorrectly launch in landscape mode. In fixing that issue last week in v246 (launched on November 8th) we inadvertently introduced a bug that caused the app to partially navigate to the camera screen adjacent to News Feed when users tapped on photos. We have seen no evidence of photos or videos being uploaded due to this bug. We’re submitting the fix for this to Apple today.

And again Facebook doesn’t say there was no evidence of photos or video generating data, storing data or sending data, especially meta data or notes about what the camera could see. It says more narrowly that the photos and videos themselves weren’t uploaded.

The Scent of Cyber

Police have embraced an emerging tactic that may be giving paws to cyber criminals.

…English springer spaniels who can detect hidden electronic devices. They follow the scent of a chemical coating used in manufacturing just as police dogs can sniff out blood, explosives and narcotics.

This dogmatic approach is not far removed from SIM sniffers used in prisons:

The dogs can do this because cell phones have a smell. The psychologist Stanley Coren once wrote that he left a collection of cell phone parts in boxes for ten days and opened them to find “a sweet metallic smell that I might fantasize that a newly built robot would have, with perhaps a faint ozone-like overtone.”

Time to launch a cologne called “Ozone Overtone”?

Russian “Seabed Warfare” Ship Sails Near U.S. Cables

Recently I wrote about developments in airborne information warfare machines.

Also in the news lately is an infamous Russian “seabed warfare” ship that suddenly appeared in Caribbean waters.

Original artwork from Covert Shores, by H I Sutton.

She can deploy deep-diving submarines and has two different remote-operated vehicle (ROV) systems. And they can reach almost any undersea cable on the planet, even in deep water where conventional wisdom says that a cable should be safe.

In the same news story, the author speculates that the ship is engaged right now in undersea cable attacks.

…search patterns are different from when she is near Internet cables. So we can infer that she is doing something different, and using different systems.

So has she been searching for something on this trip? The journey from her base in the Arctic to the Caribbean is approximately 5,800 miles. With her cruising speed of 14.5 knots it should have taken her about two weeks. Instead it has taken her over a month. So it does appear likely.
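The arithmetic in that claim checks out. A quick sanity check (assuming the 5,800 miles are nautical miles, since the speed is given in knots):

```python
# Sanity check of the transit-time claim quoted above.
distance_nm = 5800  # assume nautical miles
speed_kn = 14.5     # knots = nautical miles per hour
hours = distance_nm / speed_kn
print(f"{hours:.0f} hours = {hours / 24:.1f} days")  # ~400 hours = ~16.7 days
```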

The MarineTraffic map shows the ship near the coast of Trinidad.

MarineTraffic map of Yantar

Maps of the Caribbean waters illustrate the relevance of any ship’s position to Internet cables and seabed warfare.

TeleGeography Submarine Cable Map 2019

A Russian ship on the northwest coast of Trinidad means it may be inspecting or even tapping into the new DeepBlue cable, listed as going online in 2020. Trinidad is in the lower right corner of the above map. Here’s a zoomed-in look at the area to compare with the ship position map above:

And the DeepBlue cable specs give a pretty good idea of why a Russian seabed warfare ship would be hovering about in those specific waters…

Spanning approximately 12,000 km and initially landing in 14 markets, the Deep Blue Cable will meet an urgent demand for advanced telecom services across the Caribbean. This resilient state-of-the-art cable has up to 8 fibre pairs with an initial capacity of 6Tbps and ultimate capacity of approximately 20Tbps per fibre pair. It is designed to be fully looped maximizing system resiliency. With more than 40 planned landings, Deep Blue Cable will bring 28 island nations closer to each other and better connected to the world.
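Those specs imply serious aggregate capacity, which is the strategic point. A quick back-of-envelope calculation (reading both quoted figures as per fibre pair):

```python
# Back-of-envelope aggregate capacity for Deep Blue, per quoted specs.
fibre_pairs = 8
initial_tbps_per_pair = 6
ultimate_tbps_per_pair = 20
print(fibre_pairs * initial_tbps_per_pair, "Tbps initial")    # 48 Tbps
print(fibre_pairs * ultimate_tbps_per_pair, "Tbps ultimate")  # 160 Tbps
```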

In only somewhat related news, the U.S. has been funding a scientific mission with the latest undersea discovery robots to find missing WWII submarines.

The USS Grayback was discovered more than 1,400 feet under water about 50 miles south of Okinawa, Japan, in June by Tim Taylor and his “Lost 52 Project” team, which announced the finding Sunday.

Their announcements are public and thus show how clearly technology today can map the seabed.

Announcing the discovery of the USS Grayback on June 5th, 2019 by Tim Taylor and his “Lost 52 Project” team.

Don’t Be an AppleCard: Exposed for Using Sexist Algorithm

Wrecked ship Captain de Kam said “It’s just like losing a beautiful woman”.
Photograph: Michael Prior

The creator of Ruby on Rails tweeted angrily at Apple on November 7th that they were discriminating unfairly against his wife, and he wasn’t able to get a response:

By the next day, he had a response and he was even more unhappy. “THE ALGORITHM”, described similarly to Kafka’s 1915 novel “The Trial“, became the focus of his complaint:

She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.

And the following day he appealed to regulators for a transparency regulation:

It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?

Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.
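What would such a dossier look like? Here is a minimal sketch (the model, weights and features are invented; real credit scoring is far more complex) of a decision that ships its inputs and per-feature contributions along with the score, so a faulty input can be spotted and corrected.

```python
# Sketch of an "accessible dossier" for a credit decision: the score is
# returned together with each input's contribution. Weights invented.
WEIGHTS = {"income": 0.4, "credit_history_years": 0.3, "utilization": -0.5}

def score_with_dossier(applicant: dict) -> tuple[float, list]:
    contributions = [
        (feature, applicant[feature] * weight)
        for feature, weight in WEIGHTS.items()
    ]
    total = sum(c for _, c in contributions)
    # Sort by absolute impact so the biggest factors lead the dossier.
    return total, sorted(contributions, key=lambda x: -abs(x[1]))

score, dossier = score_with_dossier(
    {"income": 90, "credit_history_years": 12, "utilization": 30}
)
print(f"score={score:.1f}")
for feature, contribution in dossier:
    print(f"  {feature}: {contribution:+.1f}")
```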

What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether technology companies are prepared for centuries of social debt, in the form of discrimination against women and minorities, when “THE ALGORITHM” exposes the political science of inequality and links it to them.

Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies are too powerful for any individual to keep the market functioning to any degree of fairness.

Take the German government’s “Datenethikkommission” report on regulating AI, for example, as it was just released.

And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a social system of fairness for anyone else.

I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.

Again these are not revolutionary concepts. We’re seeing the impact from a disconnect between history, social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.

In my recent presentation to auditors at the annual ISACA-SF conference, I conclude with a list and several examples of how AI auditing will perform most effectively.

One of the problems we’re going to run into with auditing Apple products for transparency is that Apple has long been waging a war against any transparency in technology, from denying our right to repair hardware to forcing “store”-bought software.

Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.

The closed-minded Apple model of business is also dangerous as it directly inspires others to repeat the mistakes.

Honeywell, for example, now speaks of “taking over your building’s brains” by emulating how Apple shuts down freedom:

A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.

That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words do you want to live in a building run by a criminal brain?

When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.

Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.

…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.

A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:

The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.

Those sailing the ship need to course correct it. We shouldn’t expect people outside the cockpit to drive necessary changes. The exception is when talking about the governance group that licenses ship captains and thus holds them accountable for acting like an AppleCard.

the poetry of information security