Category Archives: History

Yet More Shit AI: Startups Appeal for Stool Photos

In 2013 I was flying around speaking on big data security controls, and wastewater analysis was one of my go-to examples of privacy and integrity risks.

The charts I showed sometimes listed the most popular drugs detected at each city’s wastewater site (e.g. cocaine in Oregon), and I would joke that we could write a guidebook to the world based on what “logs” were found.

Fancy corporate slide for “log analysis” in wastewater treatment centers around the world

Scientists at that time claimed the ability to look at city-wide water treatment plants and backtrack outputs to city-block locality. In the near future, they said, it would be possible to backtrack to a specific house or building.

For example, you get a prescription for a drug and the insurance company buys your wastewater metadata because it shows you’re taking the generic version while putting brand-label receipts in claim forms. Or someone looks at a five-year analysis of the drugs you’re on, based on sewer data science, to estimate your insurance rates.

This wasn’t entirely novel for me. As a kid I was fascinated by an archaeologist who specialized in digs of the Old West. Everything in a frontier town might be thrown down the hole (e.g. to destroy evidence of “edge” behavior), so she would write narratives about real life based on the bottles, pistols, clothes, etc. found in and around where an outhouse once stood.

I’m a little surprised, therefore, that instead of a water sensor for toilets, the latest startups ask people to use their phones to take pictures of their stool and upload them.

…Auggi, a gut-health startup that’s building an app for people to track gastrointestinal issues, and Seed Health, which works on applying microbes to human health and sells probiotics — are soliciting poop photos from anyone who wants to send them. The companies began collecting the photos online on Monday via a campaign cheekily called “Give a S–t”…

It’s a novel approach in that you aren’t pinned to the toilet in your home and can go outside and take pictures of poop on a sidewalk to upload.

This could be a game-changer given how many rideshare drivers are relieving themselves in cities like San Francisco.

Here’s the sort of chart we need right now, and not just because it looks like ride-share companies giving us the finger.

Uber’s army of 45,000 people suddenly driving from far-away places into a tiny 7-mile-by-7-mile peninsula, with zero plans for their healthcare needs, infamously drove up rates of feces deposited all over public places.

…anecdotal complaints have gotten the attention of San Francisco City Attorney Dennis Herrera. Last week, his office released information for the first time about the number of Uber and Lyft drivers estimated to be working in the city: 45,000. To compare, 1,500 taxi medallions were given out [in 2016], according to the city’s Treasurer & Tax Collector. For perspective, Bruce Schaller, an urban transportation expert, said there are about 55,000 Uber, Lyft and other ride-sharing drivers in New York City, a metropolis of 8 million people, eight times the size of San Francisco.
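
To make the quoted comparison concrete, here is a rough per-capita sketch (the San Francisco population figure is my own approximation, not from the article):

```python
# Driver density implied by the quoted figures: 45,000 ride-share drivers
# in San Francisco versus 55,000 in New York City, a city of 8 million.
# The SF population (~875,000) is an assumption, not from the quoted report.
sf_drivers, sf_population = 45_000, 875_000
nyc_drivers, nyc_population = 55_000, 8_000_000

sf_rate = sf_drivers / sf_population      # ~0.051 drivers per resident
nyc_rate = nyc_drivers / nyc_population   # ~0.007 drivers per resident

print(f"{sf_rate / nyc_rate:.1f}x")  # → 7.5x NYC's per-capita driver density
```

In other words, per resident the city was absorbing roughly seven times the ride-share driver load of New York, with nowhere for any of them to go.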

I’ll just say it again: a rise in human waste on the streets correlates pretty heavily with a rise in ride-share drivers from far away needing a convenient place to relieve themselves (especially as many ended up sleeping in their cars).

In 2016 I had a conversation with a man who had jumped out of his car to start peeing on the sidewalk in front of my house (despite surveillance cameras pointed right at him). He told me his plight:

  • Uber driver: I plan to quit as soon as I get my $700 bonus for 100 rides
  • Me: Because you just needed that quick money?
  • Uber driver: No, man, there are no restrooms. I’m tired of taking a shit on sidewalks and peeing in newspaper boxes. It’s degrading

There definitely was a spike in 2016, which perhaps could have been correlated to gig economy workers seeing that $700 bonus and wandering into the city.

In some cases it appears that ride-share drivers would accumulate a giant bag during the day and then throw it onto the street.

Sightings of human feces on the sidewalks are now a regular occurrence; over the past 10 years, complaints about human waste have increased 400%. People now call the city 65 times a day to report poop, and there have been 14,597 calls in 2018 alone. Last year, software engineer Jenn Wong even created a poop map of San Francisco, showing the concentration of incidents across the city. New mayor London Breed said: “There is more feces on the sidewalks than I’ve ever seen growing up here.” In a revolting recent incident, a 20lb bag of fecal waste showed up on a street in the city’s Tenderloin district.

Do you know what also became a regular occurrence over the past 10 years? Ride-share vehicles with drivers needing to poop and no time or place to go.

Many people mistakenly attribute the dirty truth about ride-share driver behavior to homelessness, despite curious facts like “there aren’t actually more homeless people than there have been in the past”.

People also ignore the fact that being homeless and living on the street doesn’t mean a person doesn’t care about their living environment. Homeless people are actually known to clean and sweep, whereas a driver is far more likely to poop at whatever spot they can get away with and then scoot.

I’m not sure why it is so hard for people to admit that a massive rise in ride-share drivers, with no public restrooms for them, is an obvious contributor to waste problems.

In one case I even saw an Uber SUV stop in the middle of a street; a passenger with a dog jumped out and peed directly uphill from a small restaurant with sidewalk seating. The Uber crew then jumped back in and sped away as diners watched helplessly while rivers of hot dog urine flowed under their tables.

That kind of scenario is common-sense bad, no? Just look at the ride-sharing booms of the 1800s in cities like London, which led to special huts being built for driver care and control.

By 1898 newspapers around the world reported “40 shelters in London, accommodating 3500 cabmen, and there was a fund, provided mostly by subscription, for the maintenance of them.”

Typical London Cabman’s Shelter after 1873

An app uploading photos for analysis, or even running checks within the app itself, would be both a privacy threat to all the ride-share drivers hoping to get away with their dirty business on the streets, and a source of knowledge proving that a city’s most vulnerable (homeless) populations aren’t always to blame.

It would also help analysis that often just assumes a public toilet is for pedestrians, rather than for drivers who could loiter anywhere in the city.

It’s a highly political topic, such that a “wasteland” interactive map with 2014 data turned into a crazy right-wing propaganda campaign to generate fear about San Francisco sanitation.

No mention is ever made in these political fights of unregulated ride-share drivers, despite the obvious impact of at least 40,000 people driving into the city and around in circles all day, every day: generating pollution, noise and congestion, and ultimately desperate for places to poop.

Waste-analysis sensors could change all that, and the real cost of Uber, Lyft, etc. could lead to sanitation fees (maintenance funds) for a modern-day Rideshare Shelter, which of course would have sensors on its toilets.

However, there’s already a security issue in the plan for these startups. Their data collection requires people to manually classify the photos they upload, which sounds to me like an integrity disaster. A recipe for shitty data, if you will.

[Jack Gilbert, a professor of pediatrics at the University of California San Diego School of Medicine and cofounder of the American Gut Project, a science project that solicits fecal samples from people] said that people are asked to rate their stool on the Bristol stool chart in pretty much every clinical trial he conducts, and automating this process would reduce bias and variation in data collection. “Human beings are just not very good at recording things,” he said.
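
Gilbert’s point is easier to see if you consider what the manual step looks like. Here is a minimal, hypothetical sketch of the seven-point rating an app would automate away (the descriptions are my paraphrase of the standard Bristol chart, not from either startup):

```python
# The Bristol stool chart is a seven-point clinical scale. Trials ask
# patients to self-report a number, which is exactly the error-prone
# manual step an automated photo-classifying app would replace.
BRISTOL_SCALE = {
    1: "separate hard lumps (severe constipation)",
    2: "lumpy and sausage-shaped (mild constipation)",
    3: "sausage-shaped with surface cracks (normal)",
    4: "smooth, soft, snake-like (normal)",
    5: "soft blobs with clear-cut edges (lacking fiber)",
    6: "mushy with ragged edges (mild diarrhea)",
    7: "entirely liquid (severe diarrhea)",
}

def describe(rating: int) -> str:
    """Translate a self-reported rating into the chart's description."""
    if rating not in BRISTOL_SCALE:
        raise ValueError(f"Bristol ratings run 1-7, got {rating}")
    return BRISTOL_SCALE[rating]

print(describe(4))  # → smooth, soft, snake-like (normal)
```

Every one of those self-reported numbers is a judgment call by an untrained patient, which is the bias and variation Gilbert wants automated out.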

Hopefully the startups will transition to the automated app and then traditional San Francisco residents who still walk on sidewalks, instead of calling a car to drive them three blocks, can use AI to efficiently report the prevalence of Uber poops.

Russian “Seabed Warfare” Ship Sails Near U.S. Cables

Recently I wrote about developments in airborne information warfare machines.

Also in the news lately is an infamous Russian “seabed warfare” ship that suddenly appeared in Caribbean waters.

Original artwork from Covert Shores, by H I Sutton. Click on image for more ship details.

She can deploy deep-diving submarines and has two different remote-operated vehicle (ROV) systems. And they can reach almost any undersea cable on the planet, even in deep water where conventional wisdom says that a cable should be safe.

In the same news story, the author speculates that the ship is engaged right now in undersea cable attacks.

…search patterns are different from when she is near Internet cables. So we can infer that she is doing something different, and using different systems.

So has she been searching for something on this trip? The journey from her base in the Arctic to the Caribbean is approximately 5,800 miles. With her cruising speed of 14.5 knots it should have taken her about two weeks. Instead it has taken her over a month. So it does appear likely.
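
The quoted transit arithmetic checks out as a back-of-the-envelope calculation (a sketch; it assumes the 5,800 miles are nautical miles, consistent with the speed being given in knots):

```python
# Expected transit time for the Yantar from her Arctic base to the Caribbean.
# 1 knot = 1 nautical mile per hour.
distance_nm = 5_800
cruise_speed_knots = 14.5

hours = distance_nm / cruise_speed_knots  # 400 hours
days = hours / 24

print(f"{days:.1f} days")  # → 16.7 days, a bit over two weeks at cruising speed
```

Taking over a month instead implies roughly half the voyage was spent doing something other than transiting, which is the basis of the author’s inference.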

The MarineTraffic map shows the ship near the coast of Trinidad.

MarineTraffic map of Yantar

Maps of the Caribbean waters illustrate the relevance of any ship’s position to Internet cables and seabed warfare.

TeleGeography Submarine Cable Map 2019

A Russian ship on the northwest coast of Trinidad means it’s likely inspecting, or even tapping into, the new Deep Blue cable, listed as going online in 2020. Trinidad is in the lower right corner of the above map. Here’s a zoomed-in look at the area to compare with the ship position map above:

And the Deep Blue cable specs give a pretty good idea of why a Russian seabed warfare ship would be hovering about in those specific waters…

Spanning approximately 12,000 km and initially landing in 14 markets, the Deep Blue Cable will meet an urgent demand for advanced telecom services across the Caribbean. This resilient state-of-the-art cable has up to 8 fibre pairs with an initial capacity of 6Tbps and ultimate capacity of approximately 20Tbps per fibre pair. It is designed to be fully looped maximizing system resiliency. With more than 40 planned landings, Deep Blue Cable will bring 28 island nations closer to each other and better connected to the world.

In only somewhat related news, the U.S. has been funding a scientific mission with the latest undersea discovery robots to find missing WWII submarines.

The USS Grayback was discovered more than 1,400 feet under water about 50 miles south of Okinawa, Japan, in June by Tim Taylor and his “Lost 52 Project” team, which announced the finding Sunday.

Announcing the discovery of the USS Grayback on June 5th, 2019 by Tim Taylor and his “Lost 52 Project” team.

Their announcements are public and thus show how clearly technology today can map the seabed.

It is a far cry from the Cold War methods, as illustrated in this chart of cable faults since 1959 by cause (in a report from UK think tank Policy Exchange):


The 21% fishing breaks really should have been split out more, given how the same Policy Exchange report reveals Russia “accidentally” cut cables via unmarked fishing trawlers that would hover about.

To put it another way, while nobody could positively catch these fishing boats cutting transatlantic cables, the book “Incidents at Sea” explains how breaks jumped 4X whenever the Russians would drag tackle anywhere near a cable.

In just four days of February 1959, twelve breaks occurred in five American cables off the coast of Newfoundland, with only the Russian trawler Novorossiysk nearby.

As the caption of the above historic press photo explains, the US Navy (USS Roy O. Hale) intercepted the trawler, boarded her, and searched for evidence of intent to break cables.

While broken cable was found on deck, the crew claimed that cutting it was the best option to free their tangled nets.

Nothing conclusive was found either way, so the case remained open as Russia complained about unfair detention of its citizens and the US invoked the 1884 Convention for the Protection of Submarine Telegraph Cables.


Update February 11, 2020: “New Pentagon Map Shows Huge Scale Of Worrisome Russian and Chinese Naval Operations”

Though the map does not say what time period it covers or what types of naval vessels were necessarily present in specific locations and when, it does confirm that there has been notable Russian naval activity off the coast of the southeastern United States, as well as in the North Atlantic Ocean and Caribbean, in recent years.

This new map confirms much of what has been talked about for years, although it also reveals a high amount of Chinese naval activity off the coast of Mozambique.

US DoD map showing Russian and Chinese naval activity, as well as the location of major undersea cables.

I don’t think I’ve ever seen mention of China’s heavy activity in southern African waters. The opposite, actually, as India and Mozambique recently made very public that they signed an agreement to apply pressure against Chinese ship movements in that region.

Ahead of undertaking a three-day visit to the southern African country of Mozambique, Indian Defence Minister Rajnath Singh on Friday said that the two countries will sign agreements in the fields of “exclusive economic zone surveillance, sharing of white shipping information and hydrography”.

A Chinese government promotional video for their 25th Fleet visiting Madagascar, however, offers the explanation that since “December 2008, authorized by the United Nations, the Chinese navy has been sending task forces to the Gulf of Aden and Somali waters for escort missions” before touring the coastline.

Apparently 2012 was the last time a Chinese fleet (the 10th) was in Mozambique, so that may be a clue to the age of the newly released DoD map.

Don’t Be an AppleCard: Exposed for Using Sexist Algorithm

Wrecked ship Captain de Kam said “It’s just like losing a beautiful woman”.
Photograph: Michael Prior

The creator of Ruby on Rails tweeted angrily at Apple on November 7th that they were discriminating unfairly against his wife, and he wasn’t able to get a response:

By the next day, he had a response and he was even more unhappy. “THE ALGORITHM”, described similarly to Kafka’s 1915 novel “The Trial”, became the focus of his complaint:

She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.

And the following day he appealed to regulators for a transparency regulation:

It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?

Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.

What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether centuries of social debt, in the form of discrimination against women and minorities, is what technology companies are prepared for when “THE ALGORITHM” exposes the political science of inequality and links it to them.

Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies are too powerful for any individual to keep the market functioning with any degree of fairness.

Take the German government’s “Datenethikkommission” report on regulating AI, for example, as it was just released.

And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a social system of fairness for anyone else.

I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.

Again these are not revolutionary concepts. We’re seeing the impact from a disconnect between history, social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.

In my recent presentation to auditors at the annual ISACA-SF conference, I conclude with a list and several examples of how AI auditing will perform most effectively.

One of the problems we’re going to run into with auditing Apple products for transparency is that Apple has long waged a war against any transparency in technology, from denying our right to repair hardware to forcing “store”-bought software.

Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.

The closed-minded Apple model of business is also dangerous as it directly inspires others to repeat the mistakes.

Honeywell, for example, now speaks of “taking over your building’s brains” by emulating how Apple shuts down freedom:

A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.

That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words, do you want to live in a building run by a criminal brain?

When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.

Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.

…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.

A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:

The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.

Those sailing the ship need to course-correct it. We shouldn’t expect people outside the cockpit to drive necessary changes. The exception is the governance group that licenses ship captains and thus holds them accountable for acting like an AppleCard.

Is Stanford Internet Observatory (SIO) a Front Organization for Facebook?

A “Potemkin Village” is made from fake storefronts built to fraudulently impress a visiting czar and dignitaries. The “front organization” is torn down once its specific message/purpose ends.

Image Source: Weburbanist’s ‘Façades’ series by Zacharie Gaudrillot-Roy

Step one (PDF): Facebook sets up special pay-to-play access (competitive advantage) to user data and leaks this privileged (back) door to Russia.

(October 8, 2014 email in which Facebook engineer Alberto Tretti emails Archibong and Papamiltiadis notifying them that entities with Russian IP addresses have been using the Pinterest API access token to pull over 3 billion data points per day through the Ordered Friends API, a private API offered by Facebook to certain companies who made extravagant ads purchases to give them a competitive advantage against all other companies. Tretti sends the email because he is clearly concerned that Russian entities have somehow obtained Pinterest’s access token to obtain immense amounts of consumer data. Merely an hour later Tretti, after meeting with Facebook’s top security personnel, retracts his statement without explanation, calling it only a “series of unfortunate coincidences” without further explanation. It is highly unlikely that in only an hour Facebook engineers were able to determine definitively that Russia had not engaged in foul play, particularly in light of Tretti’s clear statement that 3 billion API calls were made per day from Pinterest and that most of these calls were made from Russian IP addresses when Pinterest does not maintain servers or offices in Russia)

Step two: Facebook CEO announces his company doesn’t care if information is inauthentic or even disinformation.

Most of the attention on Facebook and disinformation in the past week or so has focused on the platform’s decision not to fact-check political advertising, along with the choice of right-wing site Breitbart News as one of the “trusted sources” for Facebook’s News tab. But these two developments are just part of the much larger story about Facebook’s role in distributing disinformation of all kinds, an issue that is becoming more crucial as we get closer to the 2020 presidential election. And according to one recent study, the problem is getting worse instead of better, especially when it comes to news stories about issues related to the election. Avaaz, a site that specializes in raising public awareness about global public-policy issues, says its research shows fake news stories got 86 million views in the past three months, more than three times as many as during the previous three-month period.

Step three: Facebook announces it has used an academic institution led by former staff to measure authenticity and coordination of actions (not measure disinformation).

Working with the Stanford Internet Observatory (SIO) and the Daily Beast, Facebook determined that the shuttered accounts were coordinating to advance pro-Russian agendas through the use of fabricated profiles and accounts of real people from the countries where they operated, including local content providers. The sites were removed not because of the content itself, apparently, but because the accounts promoting the content were engaged in inauthentic and coordinated actions.

In other words, you can tell a harmful lie. You just can’t start a union, even to tell a truth, because unions by definition would be inauthentic (representing others) and coordinated in their actions.

It’s ironic as well, since this new SIO clearly was created by Facebook to engage in inauthentic and coordinated actions. Do as they say, not as they do.

The Potemkin Village effect here is thus former staff of Facebook creating an academic front to look like they aren’t working for Facebook, while still working with and for Facebook… on a variation of the very thing that Facebook has said it would not be working on.

For example, hypothetically speaking:

If Facebook were a company in 1915 would they have said they don’t care about inauthentic information in “Birth of a Nation” that encouraged restarting the KKK?

Even to this day Americans are very confused about whether the White House of Woodrow Wilson was coordinating the restart of the KKK, and they debate that instead of the obvious failure to block a film with intentionally harmful content that got black people killed (e.g. a huge rise in lynchings, the 1919 Red Summer, the 1921 Tulsa massacre, etc.).

Instead, based on this new SIO model, it seems Facebook of 1915 would partner with a University to announce they will target and block films of pro-KKK rallies on the basis of white sheets and burning crosses being inauthentic coordinated action.

It reads to me like a very strange use of APIs as privacy backdoors, as well as use of “academic” organizations as legal backdoors; both seem to amount to false self-regulation, in an attempt to side-step the obvious external pressure to regulate harms from speech.

Facebook perhaps would have said in 1915 that KKK are fine if they call for genocide and the death of non-whites, as long as the KKK known to be pushing such toxic and inauthentic statements don’t put a hood on to conceal their face while they do it.

It’s easy to see some irony in how Facebook takes an inauthentic position, with its own staff strategically installed into an academic institution like Stanford, while telling everyone else they have to be authentic in their actions.

Also perhaps this is a good time to remember how a Stanford professor took large payments from tobacco companies to say cigarettes weren’t causing cancer.

[Board-certified otolaryngologist Bill Fees] said he was paid $100,000 to testify in a single case.


Updated November 12 to add latest conclusions of the SIO about Facebook data provided to them.

Considered as a whole, the data provided by Facebook — along with the larger online network of websites and accounts that these Pages are connected to — reveal a large, multifaceted operation set up with the aim of artificially boosting narratives favorable to the Russian state and disparaging Russia’s rivals. Over a period when Russia was engaged in a wide range of geopolitical and cultural conflicts, including Ukraine, MH17, Syria, the Skripal Affair, the Olympics ban, and NATO expansion, the GRU turned to active measures to try to make the narrative playing field more favorable. These active measures included social-media tactics that were repetitively deployed but seldom successful when executed by the GRU. When the tactics were successful, it was typically because they exploited mainstream media outlets; leveraged purportedly independent alternative media that acts, at best, as an uncritical recipient of contributed pieces; and used fake authors and fake grassroots amplifiers to articulate and distribute the state’s point of view. Given that many of these tactics are analogs of those used in Cold-War influence operations, it seems certain that they will continue to be refined and updated for the internet era, and are likely to be used to greater effect.

One thing you haven’t seen, and probably never will see, is the SIO saying Facebook is a threat, or that privately held publishing/advertising companies are a danger to society (e.g. the way tobacco companies or oil companies are a danger).