Category Archives: Sailing

US Senator Argues for Jailing Facebook Execs

From a recent interview with Oregon’s Senator Wyden

Mark Zuckerberg has repeatedly lied to the American people about privacy. I think he ought to be held personally accountable, which is everything from financial fines to—and let me underline this—the possibility of a prison term. Because he hurt a lot of people. And, by the way, there is a precedent for this: In financial services, if the CEO and the executives lie about the financials, they can be held personally accountable.

Often in 2018 I made similar suggestions, based on the thought that our security industry would mature faster if a CSO could be held personally liable like a CEO or CFO (e.g., under post-Enron SOX requirements):

And at Blackhat this year I met with Facebook security staff who said that during the 2016-2017 timeframe the team internally knew the severity of the election interference, and were shocked when their CSO failed to disclose it to the public.

Maybe the Senator putting it all on the CEO today makes some sense strategically…yet it also raises the question of whether an “officer” of security was collecting enough pay to afford a $3m house in the hills of Silicon Valley while intentionally withholding data on major security breaches on his watch.

Given the appointment of a dedicated officer in charge of security, are we meant to believe he was taking a big salary only to follow orders, bearing no personal responsibility? Don’t forget he drew press headlines (without qualification) as an “influential” executive joining Facebook, while at the same time leaving Yahoo because he said he wasn’t influential.

To be fair, he posted a statement explaining his decision at the time, and it did say that safety is the industry’s responsibility, or his company’s, not his. Should that have been an early warning that he wasn’t planning to own anything that went awry?

I am very happy to announce that I will be joining Facebook as their Chief Security Officer next Monday…it is the responsibility of our industry to build the safest, most trustworthy products possible. This is why I am joining Facebook. There is no company in the world that is better positioned to tackle the challenges…

There also is a weird timing issue. The start of the Russian campaign coincides with Facebook bringing on its new CSO. Maybe there’s nothing to this timing, just coincidence, or maybe the Russians knew they were looking at an inexperienced leader. Or maybe they even saw him as “coin-operated” (a term allegedly applied to him by US Intelligence), meaning they knew how easily he would stand down or look away:

  1. June 2015: Alex Stamos abruptly exits his first-ever CSO role, after failing to deliver on year-old promises of end-to-end encryption and failing to disclose breaches, to join Facebook as CSO. Journalists later reported this as “…beginning in June 2015, Russians had paid Facebook $100,000 to run roughly 3,000 divisive ads to show the American electorate”
  2. November 2016: Zuckerberg tries to shame investigators and claims no internal knowledge… “To think it influenced the election in any way is a pretty crazy idea”
  3. January 2017: US Intelligence report conclusively states Russia interfered in 2016 election
  4. July 2017: Facebook officially states “we have seen no evidence that Russian actors bought ads on Facebook”
  5. September 2017: Facebook backtracks and admits it knew (without revealing exactly how soon) Russian actors bought ads on Facebook
  6. September 2017: Zuckerberg muddies their admission by saying “…investigating this for many months, and for a while we had found no evidence of fake accounts linked to Russia running ads”, which focuses on knowledge of fake accounts being used rather than the more important knowledge that Russia was running ad campaigns
  7. September 2017: Zuckerberg tries to apologize in a series of PR moves like saying “crazy was dismissive and I regret it” and asking for forgiveness
  8. October 2017: Facebook’s Policy VP issues a “we take responsibility” statement
  9. October 2017: Facebook admits 80,000 posts from 2015 (the start of Stamos’ tenure as CSO) to 2017 reached over 120 million people. Stamos brands himself both as the officer in charge, making a definitive statement, and as a denied voice who wasn’t allowed to speak. It does somehow come back to the point that the Russian Internet Research Agency allegedly began operations only after Stamos joined. Even if it started before, though, he definitely did not disclose what he knew when he knew it. His behavior echoes his failure to disclose massive breaches while attempting his first CSO role at Yahoo!

Given the security failures from 2015 to 2017, we have to seriously consider the implications of a sentence describing Stamos’ priors, which somehow are what led him into becoming a Facebook CSO:

At the age of 36, Stamos was the chief technology officer for security firm Artemis before being appointed as Yahoo’s cybersecurity chief in March 2014. In the month of February, Stamos in particular clashed with NSA Director Mike Rogers over decrypting communications, asking whether “backdoors” should be offered to China and Russia if the US had such access.

There are a couple of problems with this paragraph, easily seen in hindsight.

First, Artemis wasn’t a security firm in any real sense. It was an “internal startup at NCC Group”, a concept with no real product and no real customers. As CTO he hired outside contractors to write software that never launched. This doesn’t count as proof of either leadership or technical success, and certainly doesn’t qualify anyone to be an operations leader like the CSO of a public company.

Second, nobody in their right mind in technology leadership, let alone security, would ask whether China and Russia are morally equivalent to the United States government when discussing access requests. That signals a very weak grasp of ethics and morality, as well as of international relations. I’ve spoken about this many times.

If the U.S. has access, that in no way implies other governments are somehow morally granted the same access. Moreover, this was very publicly discussed in 2007, when Yahoo’s CEO was told not to give the Chinese the access they requested (when Stamos was 28):

An unusually dramatic congressional hearing on Yahoo Inc.’s role in the imprisonment of at least two dissidents in China exposed the company to withering criticism and underscored the risks for Western companies seeking to expand there. “While technologically and financially you are giants, morally you are pygmies,” said Rep. Tom Lantos (D., Calif.)

If anything, these two points probably should have disqualified him from becoming CSO of Facebook, and that’s before we get into his one-year attempt at being CSO of Yahoo! that quickly ended in disaster.

In 2014, Stamos took on the role of chief information security officer at Yahoo, a company with a history of major security blunders. More than one billion Yahoo user accounts were compromised by hackers in 2013, though it took years for Yahoo to publicly report…Some of his biggest fights had to do with disagreements with CEO Marissa Mayer, who refused to provide the funding Stamos needed to create what he considered proper security…

Let me translate. Stamos joined and didn’t do the job of disclosing breaches because he was campaigning for more money. He was spending millions (over $2m went into prizes paid to security researchers who reported bugs). While his big-spend, bounty-centric program was popular among researchers, it didn’t build trust among customers. This parallels his work as CTO, which didn’t build any customer trust at all.

The kind of statements Stamos made about Artemis launching in the future (it never happened) should have been a warning. Clearly he thought taking over a “dot secure” domain name and then renting space to every dot com in the world was a lucrative business model (it wasn’t).

I’m obviously not making this up as you can hear him describe rent-seeking with a straight face. His business model was to use a private commercial entity to collect payments from anyone on the Internet in exchange for a safety flag to hang on a storefront, in a way that didn’t seem to have any fairness authority or logical dispute mechanism.

Here is a reporter trying to put the scheming in the most charitable terms:

In late 2010, iSEC was acquired by the British security firm, NCC Group, but otherwise the group continued operating much as before. Then, in 2012, Stamos launched an ambitious internal startup within NCC called Artemis Internet. He wanted to create a sort of gated community within the internet with heightened security standards. He hoped to win permission to use “.secure” as a domain name and then require that everyone using it meet demanding security standards. The advantage for participants would be that their customers would be assured that their company was what it claimed to be—not a spoof site, for instance—and that it would protect their data as well as possible. The project fizzled, though. Artemis was outbid for the .secure domain and, worse, there was little commercial enthusiasm for the project. “People weren’t that interested,” observes Luta Security’s Moussouris, “in paying extra for a domain name registrar who could take them off the internet if they failed a compliance test.”

Imagine SecurityScorecard owning the rights to your domain name and disabling you until you pay them to clean up the score they gave you. Dare I mention that a scorecard compliance engine is full of false positives and becomes a quality burden falling on the companies being scanned? Again, this was his only ever attempt at being a CTO (before he magically branded himself a CSO), and it was an unsuccessful non-starter, a fizzle, a dud.
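The false-positive burden is easy to show with toy base-rate arithmetic (all numbers assumed for illustration, not from any real scanner): when genuine misconfigurations are rare, even a fairly accurate compliance scanner produces alerts that are mostly wrong, and the cleanup cost lands on the companies being scanned.

```python
# Toy base-rate arithmetic -- assumed numbers, not from any real scanner.
# Even a seemingly accurate compliance scanner drowns companies in
# false positives when true misconfigurations are rare.

prevalence = 0.02           # 2% of scanned hosts actually misconfigured (assumed)
sensitivity = 0.95          # scanner catches 95% of real issues (assumed)
false_positive_rate = 0.10  # scanner flags 10% of clean hosts anyway (assumed)

true_alerts = prevalence * sensitivity
false_alerts = (1 - prevalence) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"Share of alerts that are real: {precision:.0%}")  # about 16%
```

So roughly five of every six alerts would be noise under these assumptions, which is why a compliance score with teeth (disabling your domain) is such a dangerous business model.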

From that he somehow pivoted into a publicly traded company as an officer of security. Why? How? He abruptly quit Artemis to take on a CSO role at Yahoo, demanding millions for concept projects more akin to a CTO’s than a CSO’s. He even made promises upon taking the CSO role to build features that he never delivered. Although I suppose the greater worry still is that he did not disclose breaches.

It was after all of that that he wanted to be called CSO again, this time at Facebook. That is what Wyden should be investigating. I mean, I’m fine with Wyden making a case for the CEO to be held accountable as a starting point, the same way we saw Jeff Skilling of Enron go to jail.

It makes me wonder aloud again, however, whether the CFO of Enron, Andrew Fastow, pleading guilty in 2004 to two counts of conspiracy to commit securities and wire fraud…is an important equivalent to a CSO of Facebook pleading guilty to a conspiracy to commit breach fraud.

Stamos says he deserves as much blame as anyone else for Facebook being slow to notice and stamp out Russian meddling in the 2016 presidential election

Ironically Stamos, failing to get anywhere with his three attempts at leadership (Artemis, Yahoo and Facebook), has now somehow reinvented himself (again with no prior experience) as an ethics expert. He has also found someone to fund his new project to the tune of millions, which at Blackhat some Facebook staff described to me as his way of helping Facebook avoid regulations by laundering its research as “academic”.

It will be interesting to see if Wyden has anything to say about a CSO being accountable in the same ways a CFO would be, or if focus stays on the CEO.

In any case, after a year of being CSO at Yahoo and three years of being CSO at Facebook, Stamos’ total career amassed only four years as a head of security.

Those four years unmistakably will be remembered as one person sitting on some of the biggest security operations lapses in history. And his 2015 boast that he was taking an officer role because “no company in the world is better positioned” to handle the challenges of safety continues to produce this legacy instead:

Another month, another Facebook data breach.


Update September 7th, 2019:

In another meeting with ex-Facebook staff I was told that when the “CEO and CSO are nice people” it should mean they don’t go to jail for crimes, because nice people shouldn’t go to jail. That perspective makes me wonder what people would say if I told them Epstein had a lot of friends who said he was nice. It suggests to me a change of context might help. I will first raise the issue in my CS ethics lectures with an example outside the tech industry: should the captain of a sunken ship face criminal investigation for saving himself as 34 passengers died in an early-morning fire?

A Sailor-Historian-Technologist Perspective on the Boeing 737 MAX Disaster

The tragedy of Boeing’s 737 product security decisions creates a sad trifecta for someone interested in aeronautics, lessons from the past, and risk management.

First, there was a sailor’s warning.

We know Boeing moved a jet engine into a position that fundamentally changed handling. This was a result of Airbus’s ability to add a more efficient engine to its popular A320. The A320 has more ground clearance, so a larger engine didn’t change anything in terms of handling. The 737 sits lower to the ground, so switching to a more efficient engine suddenly became a huge design change.

Here’s how it unfolded. In 2011 Boeing saw a new Airbus design as a direct threat to profitability. A sales-driven rush meant efficiency became a critical feature for their aging 737 design. The Boeing perspective on the kind of race they were in was basically this:

Boeing had to solve for a plane much closer to the ground while achieving the same marketing feat as Airbus, which said the efficiency gain didn’t change a thing (thus no costly pilot re-training). This is where Boeing made the critical decision to push its engine design forward and up on the wing…while claiming that pilots did not need to know anything new about handling characteristics.
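The geometry problem can be sketched with back-of-envelope numbers (purely illustrative, not Boeing data): at high angles of attack the engine nacelle itself generates lift, and because that lift acts ahead of the center of gravity, moving the nacelle forward lengthens the lever arm and increases the nose-up pitching moment right when a stall is nearest.

```python
# Back-of-envelope pitching-moment intuition -- illustrative numbers only,
# not Boeing engineering data. Moving a nacelle forward and up means its
# lift at high angle of attack acts farther ahead of the center of gravity,
# adding a nose-up moment.

def nacelle_pitch_moment(nacelle_lift_n, arm_ahead_of_cg_m):
    """Moment = force x lever arm; positive means nose-up."""
    return nacelle_lift_n * arm_ahead_of_cg_m

old_design = nacelle_pitch_moment(5_000, 1.0)  # engine tucked under the wing
new_design = nacelle_pitch_moment(8_000, 2.0)  # bigger nacelle, farther forward

print(old_design, new_design)  # the new geometry more than triples the nose-up moment
```

Even with made-up numbers, the direction of the effect is the point: the new placement makes the aircraft want to pitch up more, which is the handling change Boeing downplayed.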

60 Minutes Australia illustrated the difference in a segment called “Rogue Boeing 737 Max planes ‘with minds of their own’” (look carefully on the left and it says TOO BIG next to the engine):


Don’t ask me why an Australian TV show didn’t call their segment “Mad Max”.

And that is basically why handling the plane was different, despite Boeing’s claims that their changes weren’t significant, let alone safety-related. The difference in handling was so severe (risk of stall) that Boeing then doubled down with a clumsy software hack to the flight control systems to secretly compensate for the handling changes (as well as selling airlines an expensive sensor “disagree” light for pilots, which the downed planes hadn’t purchased).
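To make the fragility of that software hack concrete, here is a minimal sketch (my own illustrative Python, not Boeing’s actual flight-control code; the threshold and stall values are assumptions) of why acting on a single angle-of-attack sensor is dangerous, and how a simple cross-check of two sensors turns a stuck sensor into an alert for the pilots instead of repeated, erroneous nose-down trim:

```python
# Illustrative sketch only -- NOT Boeing's actual flight-control logic.
# Shows why trusting a single angle-of-attack (AoA) sensor is fragile,
# and how a "disagree" check across two sensors flags the fault instead.

DISAGREE_THRESHOLD_DEG = 5.5  # assumed threshold, for illustration

def single_sensor_trim(aoa_deg, stall_aoa_deg=14.0):
    """Naive logic: trust one sensor; a stuck-high sensor forces nose-down trim."""
    return "NOSE_DOWN_TRIM" if aoa_deg > stall_aoa_deg else "NO_ACTION"

def trim_with_disagree(aoa_left_deg, aoa_right_deg, stall_aoa_deg=14.0):
    """Cross-check two sensors; alert the pilots instead of trimming on disagreement."""
    if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_THRESHOLD_DEG:
        return "AOA_DISAGREE_ALERT"  # hand control back to the humans
    avg = (aoa_left_deg + aoa_right_deg) / 2
    return "NOSE_DOWN_TRIM" if avg > stall_aoa_deg else "NO_ACTION"

# A stuck-high left sensor (reading 22 deg) during normal flight (true AoA ~4 deg):
print(single_sensor_trim(22.0))       # NOSE_DOWN_TRIM (erroneous, and repeated)
print(trim_with_disagree(22.0, 4.0))  # AOA_DISAGREE_ALERT
print(trim_with_disagree(4.2, 4.0))   # NO_ACTION
```

The design lesson is old and nautical: never let one faulty instrument fight the crew for the helm.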

An odd twist to this story is that it was American Airlines that kicked off the Boeing panic about sales, with a 2011 order for several hundred new A320s. See if you can spot the more forward and higher engine design in this illustration handed out to passengers.

I added this to the story to note again how Boeing wanted to emphasize “identical” planes, yet the planes were marketed heavily as different, even in an in-flight magazine given to every passenger. It stands in contrast to how that same airline’s pilots were repeatedly told by Boeing that the two planes held no differences in flight worth highlighting in documentation.

To make an even finer point, the Airbus A320 in that same airline magazine doesn’t have a sub-model.

While this engine placement clearly had been approved by highly specialized engineering management thinking short-term (about racing through FAA compliance), who was thinking about serious long-term instability as a predictable cost?

The emerging safety problems led to a series of shortcut hacks and partial explanations that minimized talk of stabilizing, or training for, the new flow characteristics, rather than admit the huge long-term implications (deaths).

Boeing Knew About Safety-Alert Problem for a Year Before Telling FAA, Airlines

The Seattle Times posted clear evidence of pilots fighting against their own ship, unaware of the reasons it was fighting them.

Anyone who sails, let alone flies airplanes, immediately can see the problem in calling a 737 “Mad Max” the same as a prior 737 design when flow handling has changed — one doesn’t just push a keel or mast around without direct tiller effects.

Some pilots say, unofficially, that they knew the 737 “Mad Max” was not the same and, at least in America, were mentally preparing themselves for how to react to a defective system. Officially, however, pilots globally needed to be warned clearly and properly, as well as trained better on the faulty software that would fight with them for safe control of the aircraft.

Second, America has a “Widowmaker” precedent.

Years ago I wrote about pilot concerns with a WWII plane, the crash-prone B-26.

The B-26 had a high rate of accidents in takeoff and landing until crews were trained better and the aspect ratio modified on its wings/rudder

That doesn’t tell the whole story, though. In terms of history repeating itself, evidence mounted that this American airplane was manifestly unsafe to fly and that the manufacturer wasn’t inclined to proactively fix it and save lives.

A biographer of Truman gives us some details from the 1942 Senate hearings, foreshadowing the situation today with Boeing.

Apparently crashes of the Martin B-26 were happening at least monthly, and sometimes every other day. Yes, crashes were literally happening 15 days out of 30 and the plane wasn’t grounded.

The Martin company responded to concerns by starting a PR campaign gloating about how one of its aircraft actually didn’t kill everyone on board, and received blessings from Churchill.

Promoting survivorship should be recognized today as a dangerous and infamously bad data tactic. Focusing on Boeing’s economics is the right thing here. Boeing hasn’t stooped yet to Martin’s survivorship-bias campaign, but it does seem Boeing knowingly was putting lives at risk to win a marketing and sales battle with a rival, similar to what Tesla could be accused of doing.
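Survivorship bias is easy to demonstrate with toy numbers (entirely hypothetical, not Martin’s or Boeing’s data): estimate a fleet’s failure rate from only the aircraft that came back, and the risk conveniently shrinks.

```python
# Toy illustration of survivorship bias -- hypothetical numbers only.
# Estimating a fleet's failure rate from just the aircraft that returned
# understates risk: the same flaw as celebrating one lucky survivor.

fleet = [
    # (aircraft_id, returned, had_critical_failure)
    ("A", True,  False),
    ("B", True,  False),
    ("C", True,  True),   # survived despite a failure -- the PR story
    ("D", False, True),   # lost; invisible if we only study survivors
    ("E", False, True),   # lost; also invisible to the survivor sample
]

survivors = [a for a in fleet if a[1]]
true_failure_rate = sum(a[2] for a in fleet) / len(fleet)
survivor_failure_rate = sum(a[2] for a in survivors) / len(survivors)

print(f"True failure rate:      {true_failure_rate:.0%}")      # 60%
print(f"Survivor-only estimate: {survivor_failure_rate:.0%}")  # 33%
```

The lost aircraft are exactly the data a survivorship campaign hides, which is why celebrating the one plane that made it home is such a dangerous statistical move.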

Third, there are broad societal issues from profitable data integrity flaws.

Can we speak openly yet about the executives making money on big data technology with known integrity flaws that kill customers?

There’s really a strange element to this story from a product-management decision flow. Nobody should want to end up where we are today with this issue.

Boeing knew right away its design change impacted the handling of the product. They then added fixes in, without notifying the customers responsible for operating the product of the severity of a fix failure (a crash).

I believe this is where and why the expanding number of investigations are being cited as “criminal” in nature.

  • Investigation of development and certification of the Boeing 737 MAX by the FAA and Boeing, by DoJ Fraud Section, with help from the FBI and the DoT Inspector General
  • Administrative investigation by the DoT Inspector General
  • DoT Inspector General hearings
  • FAA review panel on “certification of the automated flight-control system on the Boeing 737 MAX aircraft, as well as its design and how pilots interact with it”
  • Congressional investigation of the “status of the Boeing 737 MAX” for the US House Transportation and Infrastructure Committee

These investigations seem all to be getting at the sort of accountability I’ve been saying needs to happen for Facebook, which also suffered from integrity flaws in its product design. Will a top executive eventually be named? And will there be wider impact to engineering and manufacturing ethics in general? If the Grover Shoe Factory disaster is any indication, the answers should be yes.

In conclusion, if a change in design is being deceptively presented, and the suffering of those impacted is minimized (because profits, duh), then we’re approaching a transportation regulatory moment that really is about software engineering. What may emerge is that these software-based transportation risks, because of the fatalities, will bring regulation for software in general.

Even if regulation isn’t coming, the other new reality is that buyers (airlines, especially outside the US and beyond the FAA) will do what Truman suggested in 1942: cancel contracts and buy from another supplier who can pass transparency/accountability tests.

Fruit Fly Movements Imitated by Giant Robot Brain Controlled by Humans

They say fruit flies like a banana, and new science may now be able to prove that theory because robot brains have figured out that to the vector go the spoils.

The Micro Air Vehicle Lab (MAVLab) has just published their latest research

The manoeuvres performed by the robot closely resembled those observed in fruit flies. The robot was even able to demonstrate how fruit flies control the turn angle to maximize their escape performance. “In contrast to animal experiments, we were in full control of what was happening in the robot’s ‘brain’.”

I can’t help but notice how the researchers emphasize getting away from threats with “high-agility escape manoeuvres” as a primary motivation for their work, which isn’t bananas. In my mind escape performance translates to better wind agility, and therefore weather resilience.

The research also mentions the importance of rapidly falling costs in flying machines. No guesses as to who would really need such an affordable threat-evading flying machine.

I mean, times really have changed since the 1970s, when:

Developed by CIA’s Office of Research and Development in the 1970s, this micro Unmanned Aerial Vehicle (UAV) was the first flight of an insect-sized aerial vehicle (Insectothopter). It was an initiative to explore the concept of intelligence collection by miniaturized platforms.

The Insectothopter was plagued by an inability to fly in actual weather, as even the slightest breeze would render it useless. In terms of lessons learned, the same problems cropped up with Facebook’s (now cancelled) intelligence collection by elevated platform.

On June 28, 2016, at 0743 standard mountain time, the Facebook Aquila unmanned aircraft, N565AQ, experienced an in-flight structural failure on final approach near Yuma, Arizona. The aircraft was substantially damaged. There were no injuries and no ground damage. The flight was conducted under 14 Code of Federal Regulations Part 91 as a test flight; the aircraft did not hold an FAA certificate of airworthiness.

Instead of getting into the “airworthiness” of fruit flies, I will simply point out that “final approach” is where the winds blow and the damage occurred. If only Facebook had factored in some escape performance maximization to avoid the ground hitting them so dangerously when they landed.

Lessons in Secrets Management from a Navy SEAL

Good insights from these two paragraphs about the retired Rear Admiral Losey saga:

Speaking under oath inside the Naval Base San Diego courtroom, Little said that Losey was so scared of being recorded or followed that when the session wrapped up, the SEAL told the Navy investigator to leave first, so he couldn’t identify the car he drove or trace a path back to his home.

[…]

…he retaliated against subordinates during a crusade to find the person who turned him in for minor travel expense violations.

2018 AppSec California: “Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare”

My latest presentation on securing big data was at the 2018 AppSec California conference:

When: Wednesday, January 31, 3:00pm – 3:50pm
Where: Santa Monica
Event Link: Unpoisoned Fruit: Seeding Trust into a Growing World of Algorithmic Warfare

Artificial Intelligence, or even just Machine Learning for those who prefer organic, is influencing nearly all aspects of modern digital life. Whether it be financial, health, education, energy, transit…emphasis on performance gains and cost reduction has driven the delegation of human tasks to non-human agents. Yet who in infosec today can prove agents worthy of trust? Unbridled technology advances, as we have repeatedly learned in history, bring very serious risks of accelerated and expanded humanitarian disasters. The infosec industry has been slow to address social inequalities and conflict that escalates on the technical platforms under their watch; we must stop those who would ply vulnerabilities in big data systems, those who strive for quick political (arguably non-humanitarian) power wins. It is in this context that algorithm security increasingly becomes synonymous with security professionals working to avert, or as necessary helping win, kinetic conflicts instigated by digital exploits. This presentation therefore takes the audience through technical details of defensive concepts in algorithmic warfare based on an illuminating history of international relations. It aims to show how and why to seed security now into big data technology rather than wait to unpoison its fruit.

Copy of presentation slides: UnpoisonedFruit_Export.pdf

Did a Spitfire Really Tip the Wing of V1?

Facebook has built a reputation for being notoriously insecure, taking payments from attackers with little to no concern for the safety of its users. But a pattern of neglect for information security is not exactly the issue when a finance guy in Sydney, Australia gives a shout-out to a Facebook user for what he calls an “amazing shot” in history:

As anyone hopefully can see, this is a fake image. Here are some immediate clues:

  1. Clarity. What photographic device of this era would have such an aperture, let alone resolution?
  2. Realism. The rocket exhaust, markings, ground detail…all too “clean” to be real. That exhaust in particular is an eyesore.
  3. Positioning. The Spitfire’s velocity and turbulence relative to the V1 are questionable, so this overlapped, steady formation is unlikely.
  4. Vantage point. Given the positioning issue, a photographer holding a close position aft of the Spitfire is even less likely.

That’s only a quick list, but enough to make the solid point that this is a fabrication anyone should be able to discount at first glance. In short, when I see someone say they found an amazing story or image on Facebook, there’s a very high chance it’s toxic content meant to deceive and harm, much in the same way tabloid stands in grocery stores used to operate. Entertainment and attacks should be treated as such, not as realism or useful reporting.

Now let’s dig a little deeper.

In 2013 an “IAF Veteran” posted a shot of a Spitfire tipping a V1. This passes many of the obvious tests above. Unfortunately he also inserts some nonsense in the text about the dangers of firing bullets, and about reliably blowing up a V1 in the air far away from civilians versus sending it unpredictably to the ground. Ignore that patently false analysis (shooting remained the default) and revel instead in the period photographic image quality:

Several years then passed by, and nobody talked about V1 tipping, until just a few weeks ago a “Military aviation art” account posted a computer-rendered image with the comment “Part of a new work depicting the first tipping of a V-1 flying bomb with a wing tip. Who achieved this?”.

It’s a shame the artist’s tweet with the image wasn’t given proper and full credit by the Sydney finance guy, as it would have made far more sense to link to the artist talking about their “new work”, or even their gallery and exact release dates:

Who indeed? The artist answers their own question in their next tweet, writing:

First to physically tip a V1 bomb was Ken Collier, 91 Squadron, in a Spitfire MkIVX. He scored 7 V1 victories and was later KIA. #WWII #WW2.

On the bright side, the artist answers their own question with some real history worth researching further. On the dark side, the artist’s answer sadly omits any link to original source or reference material, let alone the (attempted) realism found above in that “IAF Veteran” tweet with an actual photograph.

The artist simply says it is based on a real event, and leaves out the actual photograph (perhaps to avoid acknowledging the blurry inspiration for the art) while including a high-resolution portrait photo of the pilot who achieved it. It is kind of misleading to have that high-resolution photograph of Ken Collier sitting on the ground instead of the one the IAF Veteran tweeted of a Spitfire in flight.

The more complete details of this story not only are worth telling, they put the artist’s high-resolution fantasy reconstruction of a grainy, blotchy image into proper context. Fortunately “V1 Flying Bomb Aces” by Andrew Thomas is also online, and tells us through first-person accounts of a squadron diary what really happened (notice both original photographs together in this book, the plane and the pilot).

Normally a V1 would be shot down, as you can see in this Popular Mechanics article describing hundreds destroyed in 1944 by the 20mm cannons on the Tempest.

Popular Mechanics Feb 1945

Just to make it absolutely clear, since Popular Mechanics doesn’t specify shooting or tipping, here’s a log from ace pilots who downed V1s. It not only describes debris from explosions as a known risk to be avoided, leading to some gun modifications for longer ranges; it also characterizes tipping as unusual and low-frequency, such as at the end of a run (gun jammed).

Excerpted from “V1 Flying Bomb Aces” by Andrew Thomas

In the case of the artist’s rendering that started this blog post, a Spitfire pilot found himself firing until out of ammo. Frustrated at being without ammo, he decided to tip a wing of the V1. Shooting the V1 was preferred, as it would explode in the air and kill far fewer people than being tipped to explode on the ground. Only because he ran out of bullets, and in a frustrated, innovative state, did he decide to tip…of course later there would be others, but the total tipped this way was in the dozens out of the many thousands destroyed.

Does the finance guy in Sydney feel accountable for claiming a real event in an artist’s fantasy image? Of course not. He has been responding to people that he thinks it still is a fine representation of a likely event, and he doesn’t measure any harm from the confusion caused; he believes the harm he has done still doesn’t justify making a correction.

Was he wrong to misrepresent it, and should he delete his “amazing shot” tweet and replace it with one that says amazing artwork or new rendering? Yes, that would be the sensible thing if he cares about history and accuracy, but the real question is centered on the economics of why he won’t change. Despite being repeatedly made aware that he has become a source of misinformation, the cost of losing “likes” probably weighs heavier on him than the cost of damaged integrity.

Could truck drivers lose their jobs to robots?

Next time you bang on a vending machine for a bottle that refuses to fall into your hands, ask yourself if restaurants soon will have only robots serving you meals.

Maybe it’s true there is no future for humans in service industries. Go ahead, list them all in your head. Maybe problems robots have with simple tasks like dropping a drink into your hands are the rare exceptions and the few successes will become the norm instead.

One can see why it’s tempting to warn humans not to plan on expertise in “simple” tasks like serving meals or tending a bar: take the smallest machine successes and extrapolate them into grand future theories of massive gains, with no execution flaws or economics gone awry.

Just look at cleaning, sewing and cooking for examples of what will be, how entire fields have been completely automated with humans eliminated…oops, scratch that, I am receiving word from my urban neighbors that they all seem to still have humans involved, providing some degree of advanced differentiation.

Maybe we should instead look at darling new startup Blue Apron, turning its back on automation as it lures millions in investments to hire thousands of humans to generate food boxes. This is such a strange concept of progress and modernity to anyone familiar with the TV dinners of the 1960s and the reasons they petered out.

Blue Apron’s meal kit service has had worker safety problems

Just me or is anyone else suddenly nostalgic for that idyllic future of food automation (everything containerized, nothing blended) as suggested in a 1968 movie called “2001”…we’re 16 years late now and I still get no straw for my fish container?

2001 prediction of food

I don’t even know what that box on the top right is supposed to represent. Maybe 2001 predicted chia seed health drinks.

Speaking of cleaning, sewing and cooking with robots…someone must ask at some point why much of automation has focused on archetypal roles for women in American culture. Could driverless tech be targeting the “soccer-mom” concept along similar lines; could it arguably “liberate” women from a service desired from patriarchal roles?

Hold that thought, because instead right now I hear more discussion about a threat from robots replacing men in the over-romanticized male-dominated group of long-haul truckers. (Protip: women are now fast joining this industry)

Whether measuring accidents, inspections or compliance issues, women drivers are outperforming males, according to Werner Enterprises Inc. Chief Operating Officer Derek Leathers. He expects women to make up about 10 percent of the freight hauler’s 9,000 drivers by year’s end. That’s almost twice the national average.

The question is whether America’s daily drivers, many of them professionals in trucks, face machines making them completely redundant, just like vending machines eliminating bartenders.

It is very, very tempting to peer inside any industry and make overarching forecasts of how jobs simply could be lost to robots. Driving a truck on the open road, between straight lines, sounds robotic already to those who don’t sit in the driver’s seat. The question we should be answering is why this has not already been automated, rather than how soon it will happen.

Only at face value does driving present a bar so low (pun not intended) machines easily could take it over today. Otto of the 1980 movie “Airplane” fame comes to mind for everyone I’m sure, sitting ready to be, um, “inflated” and take over any truck anywhere to deliver delicious TV dinners.

Otto smokes a cig

Yet when scratching at the barriers, maybe we find trucking is more complicated than this. Maybe there is more to the human processes, something genuinely intelligent, than meets the eye of a robotics advocate with no industry-specific experience.

Systems that have to learn, true robots of the future, need to understand the totality of the environment they will operate within. And this raises the question of “knowledge” about all the tasks being replaced, not simply the ones we know from watching Hollywood interpretations of the job. A common mistake is to underestimate knowledge and predict its replacement with an incomplete checklist of tasks believed to point in the general direction of success.

Once the environmental underestimation mistake is made, another mistake is to forecast cost improvements by accelerating checklists toward a goal of immediate decision capability. We have seen this with bank ATMs, which actually cost a lot of money to build and maintain and never replaced teller decision-trees; they introduced even more security risks and fraud, which required humans to develop checklists and perform menial tasks to maintain the ATMs, which still haven’t achieved full capability. This arguably means new role creation is the outcome we should expect, mixed with a modest or even slow decline of jobs (less than 10% over 10 years).

Automation struggles at eliminating humans completely because of the above two problems (the need for common sense and foundations, and the need for immediate decision capability built on those foundations), and that’s before we even get to the need for memory, feedback loops and strategic thinking. The latter two are essential for robots replacing human drivers. Translation to automation exposes nuances in knowledge that humans excel at, as well as long-term thinking both forwards and backwards.

Machines are supposed to move beyond limited data sets and raise minimum viable objectives above human performance, yet this presupposes success at understanding context. Complex streets and dangerous traffic situations are a very high bar, so high they may never be reached without principled human oversight (e.g. ethics). Without deep knowledge of trucking in its most delicate moments, the reality of driver replacement becomes augmentation at best. Unless the definition of “driver” changes, goal posts are moved, and expectations for machines are measured far below full human capability and environmental possibility, we remain a long way from replacement.

Take for example the amount of time it takes to assess the risk of killing someone on an urban street full of construction, school and loading zones. A human is not operating within a window 10 seconds from impact, because they typically aim to identify risks far earlier, avoiding catastrophes born of leaving decisions to the last seconds.
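To put rough numbers on that anticipation window, here is a minimal stopping-distance sketch. The reaction time and deceleration figures are illustrative assumptions, not measured values for any real truck.

```python
# Illustrative stopping math: distance covered during reaction time
# plus braking distance (v^2 / 2a). Assumed reaction time and
# deceleration are rough placeholders, not industry measurements.

def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_ms2=3.0):
    """Total distance (meters) needed to stop from speed_kmh."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v * v / (2 * decel_ms2)

# A truck at 50 km/h in an urban zone needs roughly 53 meters to stop,
# which is why risk has to be identified well before the last second.
print(round(stopping_distance_m(50), 1))
```

Even with generous assumptions, the machine’s sub-second reflexes don’t eliminate the need to anticipate hazards long before they enter braking range.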

I’m not simply talking about control of the vehicle, incidentally (no pun intended); I also mean decisions about insurance policies and whether to stay and wait for law enforcement to show up. Any driver with rich experience behind the wheel could tell you this, and yet some automation advocates still haven’t figured it out, as they emphasize that the sub-second speed of their machines is all they need or want for making decisions, with no intention to obey human-imposed laws (hit-and-run incidents increased more than 40% after Uber was introduced to London, causing 11 deaths and 5,000 injuries per year).

For those interested in history we’re revisiting many of the dilemmas posed the first time robotic idealism (automobiles) brought new threat models to our transit systems. Read a 10 Nov 1832 report on deaths caused by ride share services, for example.

The Inquest Jury found a verdict of manslaughter against the driver,—a boy under fifteen years of age, and who appeared to have erred more from incapacity than evil design; and gave a deodand of 50l. against the horse and cabriolet, to mark their sense of the gross impropriety of the owner in having intrusted the vehicle to so young and inexperienced a person.

1896 London Public Carriages

Young and inexperienced is exactly what even the best “learning” machines are today. Sadly, for most of the 19th century, London authorities showed remarkably little interest in shared-ride driving ability. Tests to protect the public from weak, incapacitated or illogical drivers of “public carriages” started only around 1896.

Finding a balance between insider expertise based on experience and outsider novice-learner views is the dialogue playing out behind the latest NHTSA automation scales meant to help regulate safety on our roads. People already are asking whether the costs to develop systems that go higher than “level three” (cede control under certain conditions and environments) autonomous vehicles are justified. That third level of automation is what outsiders typically argue will be the end of the road for truck drivers (as well as soccer moms).

The easy answer to the third level is no; it still appears to be years before we can SAFELY move above level three and remove humans in common environments (not least of all because hit-and-run economics heavily favor driverless fleets). Cost reductions through automation today make far more sense at the lower ends of the scale, where human driver augmentation brings sizable returns and far fewer chances of disaster or backlash. The real cost, human life lost to error, escalates quickly when we push into the full range of even the basic skills necessary to be a safe driver in every environment or on any street.

There also is a more complicated answer. By 2013 we saw Canadian trucks linking up on Alberta’s open roads and using simple caravan techniques. Repeating methods known for thousands of years, driver fatigue and energy costs were significantly reduced through caravan theory. Like a camel watching the tail of the one in front through a sandstorm…. In very limited private environments (e.g. competitions, ranches, mines, amusement parks) the cost of automation is lower and the benefits are realized early.

I say the answer is complicated because a level three autonomous vehicle still must have a human at the controls to take over, and I mean always. The NHTSA has not yet provided any real guidance on what that means in reality. How quickly a human must take over leaves a giant loophole in defining human presence. Could the driver be sleeping at the controls, watching a movie, or even reposing in the back seat?

The Interstate system in America has some very long-haul segments with traffic flowing at similar speeds, with infrequent risk of sudden stops or obstructions. Tesla, in their typically dismissive-of-safety fashion despite (or maybe because of) their cars repeatedly failing and crashing, called major obstructions on highways a “UFO”-frequency event.

Cruise control and lane-assist in pre-approved and externally monitored safe-zones in theory could allow drivers to sleep as they operate, significantly reducing travel times. This is a car automation model actually proposed in the 1950s by GM and RCA, predicted to replace drivers by 1974. What would the safe-zone look like? Perhaps one human taking over the responsibility by using technology to link others, like a service or delegation of decision authority, similar to air traffic control (ATC) for planes. Tesla is doing this privately, for those in the know.

Ideally if we care about freedom and privacy, let alone ethics, what we should be talking about for our future is a driver and a co-pilot taking seats in the front truck of a large truck caravan. Instead of six drivers for six trucks, for example, you could find two drivers “at the controls” for six trucks connected by automation technology. This is powerful augmentation for huge cost savings, without losing essential control of nuanced/expert decisions in myriad local environments.

This has three major benefits. First, it helps with the shortage of needed drivers, mentioned above as being filled by women. Second, it allows robot proponents to gather real-world data through safe open-road operations. Third, it opens the possibility of job expansion and transitions for truckers to drone operations.

On the other end of the spectrum from boring unobstructed open roads, in terms of driverless risks, are the suburban and urban hubs (warehouses and loading docks) that manage complicated truck transactions. Real human brain power still is needed for pickups and deliveries over the final miles, unless we re-architect the supply chain. In a two-driver, six-truck scenario this means that after arriving at a hub, trucks return to a one-driver, one-truck relationship, like airplanes reaching an airport. Trucks lacking human drivers at the controls would sit idle in a queue or…wait for it…be “remotely” controlled by the locally present human driver. The volume of trucks (read: percentage of “drones”) could increase significantly while the number of drivers needed might decline only slightly.
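The staffing arithmetic of the caravan model can be sketched in a few lines. The caravan and crew sizes below are illustrative assumptions from the scenario in the text, not industry figures.

```python
# Back-of-envelope sketch of caravan staffing: each caravan of linked
# trucks needs only a small human crew in the lead vehicle.

def drivers_needed(trucks, caravan_size, crew_per_caravan):
    """Open-road drivers required if trucks travel in caravans of
    `caravan_size`, each led by `crew_per_caravan` humans."""
    caravans = -(-trucks // caravan_size)  # ceiling division
    return caravans * crew_per_caravan

# Conventional model: one driver per truck.
print(drivers_needed(6, caravan_size=1, crew_per_caravan=1))  # 6
# Caravan model: a driver and co-pilot leading six linked trucks.
print(drivers_needed(6, caravan_size=6, crew_per_caravan=2))  # 2
```

The savings apply only on the open road; at the hubs the trucks fall back to a one-driver, one-truck relationship, so total driver demand declines far less than the open-road ratio suggests.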

Other situations still requiring human control tend to be bad weather or roads lacking clear lines and markings. Again this would simply mean humans at the controls of a lead vehicle in a caravan. Look at boats or planes again for comparison. Both have had autopilots far longer, at least for decades, and human oversight has yet to be cost-effectively eliminated.

Could autopilot be improved to avoid scenarios that lead to disaster, killing their human passengers? Absolutely. Will someone pay for autopilots to avoid any such scenarios? Hard to predict. For that question it seems planes are where we have the most data to review because we treat their failures (likely due to concentrated loss of life) with such care and concern.

There’s an old saw about Allied bombers of WWII being riddled with bullet holes yet still making it back to base. After much study the Air Force put together a presentation and told a crowded room that armor would be added to all the planes where concentrations of holes were found. A voice in back of the crowd was heard asking “but shouldn’t you put the armor where the holes aren’t? Where are the holes on planes that didn’t come back”.
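That lesson, commonly known as survivorship bias, fits in a few lines of code. The hole counts below are invented purely for illustration.

```python
# Toy illustration of survivorship bias in the bomber story: analyzing
# only the planes that returned points armor at the wrong spots.
# All data here are invented for illustration.

returned = [
    {"wings": 8, "fuselage": 11, "engines": 1, "cockpit": 0},
    {"wings": 6, "fuselage": 9,  "engines": 0, "cockpit": 1},
    {"wings": 7, "fuselage": 12, "engines": 1, "cockpit": 0},
]

# Tally bullet holes per area across the surviving planes.
totals = {}
for plane in returned:
    for area, holes in plane.items():
        totals[area] = totals.get(area, 0) + holes

# Naive analysis: armor where survivors show the most holes.
naive_choice = max(totals, key=totals.get)
print(naive_choice)  # "fuselage" -- heavily hit, yet these planes survived

# The insight from the story: hits in those areas were survivable; planes
# that never came back were likely hit where survivors show few holes.
wald_choice = min(totals, key=totals.get)
print(wald_choice)  # "cockpit" -- sparse holes on survivors suggest lethality
```

The same trap applies to driving data: a fleet that only learns from trips that ended safely is studying the holes in the planes that came back.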

It is time to focus our investments on collecting and understanding failures to improve the driving algorithms of humans, by enhancing the role of drivers. The truck driver already sits on a massively complex array of automation (engines and networks), so adding more doesn’t equate to removing the human completely. Humans still are better at complex situations such as power loss or reversion to manual controls during failures. Automation can make the flat open straight lines into the sunset more enjoyable, as well as the blizzard and frozen surface, but only given no surprises.

Really we need to be talking about enhancing drivers: hauling more over longer distances with fewer interruptions, with reduced fatigue and increased alertness under less strain. Until systems move above level three automation, the best-case use of automation is still augmentation.

Drivers could use machines to make ethical improvements to their complex logistics of delivery (fewer emissions, increased fuel efficiency, reduced strain on the environment). If we eliminate drivers in our haste to replace them, we could see fewer benefits and achieve only the lowest forms of automation, the ones outsiders would be pleased with while those who know better roll their eyes in disappointment.

Or maybe Joe West & the Sinners put it best in their classic trucker tune “$2000 Navajo Rug”:

I’ve got my own chakra machine, darlin’,
made out of oil and steel.
And it gives me good karma,
when I’m there behind the wheel

2016 BSidesLV Ground Truth Keynote: Great Disasters of Machine Learning

I presented the Ground Truth Keynote at the 2016 BSidesLV conference:

Great Disasters of Machine Learning: Predicting Titanic Events in Our Oceans of Math

When: Wednesday, August 3, 10:00 – 10:30
Where: Tuscany, Las Vegas
Cost: Free (as always!)
Event Link: ground-truth-keynote-great-disasters-of-machine-learning

This presentation sifts through the carnage of history and offers an unvarnished look at some spectacular past machine learning failures to help predict what catastrophes may lie ahead if we don’t step in. You’ve probably heard about a Tesla autopilot that killed a man…

Humans are great at failing. We fail all the time. Some might even say intelligence is so hard won and infrequent let’s dump as much data as possible into our “machines” and have them fail even faster on our behalf at lower cost or to free us. What possibly could go wrong?

Looking at past examples, learning from failures, is meant to ensure we avoid repeating them. Yet it turns out that when we focus our machines narrowly, and ignore safety decision controls or similar values, we simply repeat avoidable disasters instead of achieving faster innovation. They say hindsight is 20-20, but you have to wonder if even our best machines need corrective lenses. At the end of the presentation you may find yourself thinking how easily we could have saved a Tesla owner’s life.

Copy of Presentation Slides: 2016BSidesLV.daviottenheimer.pdf (8 MB)

Full Presentation Video:

Some of my other BSides presentations:

How We Could Use Cyber Letters of Marque

Rick Holland pointed out today that Dave Aitel last April wrote an article, “US Steel demonstrates why we need Cyber Letters of Marque”:

…while economic competitiveness is at some level a strategic need, the particular defense of a US Company is not something the NSA can and should prioritize. The answer to this problem is allowing private companies to offer their services under strict law enforcement and intelligence community oversight to perform the actions needed, including remote intrusion, data exfiltration and analysis, that would allow US Steel and the US Government to build a rock-solid case for criminal liability and sanctions. In that sense, cyber Letters of Marque are more similar to private investigator licensing than privateer licensing.

To me this misses the real point of letters of marque. An extension of government services under license is approaching the for-hire contract system as used already. The infamous Blackwater company, for example, implemented privatized security services.

We are trying to do for the national security apparatus what FedEx did for the Postal Service

Let me set aside a US-centric perspective for a moment, given that the US has not ratified the 1856 Declaration of Paris, signed by 55 states to formally outlaw privateers. Arguably this is because American leaders thought they never would want or have a standing military and thus would rely on privateers for self-defense against established European armies. The Constitution, Article 1, Section 8, still lists letters of marque as an enumerated power of Congress:

To declare War, grant Letters of Marque and Reprisal, and make Rules concerning Captures on Land and Water;

To raise and support Armies, but no Appropriation of Money to that Use shall be for a longer Term than two Years;

Note the two-year limit on funding armies. US Congress right now can issue a letter of marque to private entities, who would be given neither funding nor oversight, so they can submit prizes won to a court for judicial determination.

On a more global note, what we really ought to be talking about here is how someone wronged directly can take action, akin to self-defense or hiring a bodyguard, when their government says an organized defense is unavailable. A letter of marque thus would be offered as a license to defend oneself, subject to consideration by a court after the fact, where a government entity cannot help.

In historic terms (before 1856) any authority might issue a letter to “privateers”; spoils taken from enemies were to be brought back to that issuer’s court for settlement. Upon seizing goods the privateer returned to an admiralty or authority for assessment, in what we might call a “spoils court”.

An excellent example of this was when two ships flying American flags attacked a British ship, the nations being at war. A fourth ship sailed late into this battle flying a British flag and chased away the two American ships. Sounds like a simple case of a British nation-state defending itself against two American privateers, right?

No, this fourth ship then dropped its British flag, raised an American one, and scuttled the already heavily damaged British ship that it had pretended to defend. Now acting as an American privateer it could enter an American port alone with enemy spoils as a “patriotic” duty under a letter of marque. Had the fourth ship simply helped the other two American ships a spoils court would have awarded at most a third of the full sum it received.

The use of an authority for judgment of spoils and settlement is what distinguishes the “patriotic” privateers from pirates who operated independently and eschewed judgment by larger global organizations (pirates often were those who had left working for large organizations and set out on their own specifically to escape unjust/unhealthy treatment).

So I say letters of marque have a different and more controversial spin from the licensing or even a contractor model mentioned above in Aitel’s post:

…allowing private companies to offer their services under strict law enforcement and intelligence community oversight to perform the actions needed…

Strict oversight? What we also must consider is issuing letters to wronged companies that will not have strict oversight (because of cost and complexity). How can we allow self-defense, a company legally taking action against their “enemies,” using after-the-fact oversight in courts?

We seek to maintain accountability while also releasing obligation for funding or strict coordination by an authority. This takes us into a different set of ethics concerns versus a system of strict oversight, as I illustrated with the American ship example above. Ultimately the two wronged American ships had recourse. They sued the fourth ship for claiming spoils unfairly, since it arrived late in the battle. Courts ruled in their favor, giving them their “due”.

Here’s a simple example in terms of US Steel:

The US government finds itself unable to offer any funds or oversight for a response to an attack reported by US Steel. Instead the government issues a letter of marque. US Steel itself, or through private firms it contracts, finds and seizes the assets used by its attackers. The recovered assets and details of the case are submitted to a court, which judges their actions. Spoils in modern terms could mean customers, IP or even infrastructure.

In other words, if US Steel finds 90% of IP theft is originating from a specific service provider, and a “take over” of that provider would stop attacks, the courts could rule after US Steel defends itself that seized provider assets (e.g. systems and their networks found with IP stolen from US Steel) are a “prize” for US Steel.

It’s not a clear-cut situation, obviously, because it opens the possibility of powerful corporations seizing assets from anyone they choose and think they can take. That would be piracy. Instead, accountability for prizes is weighed by the authority of the courts, to reduce abuse of letters.

American Pro-Slavery History Markers

Charlotte, North Carolina, has a history marker that I noticed while walking on the street.

It is in need of major revision. Let me start at the end of the story first. A search online found a “NC Markers” program with an entry for L-56 CONFEDERATE NAVY YARD.

Closer to the end of the war…tools and machinery from the yard were moved from Charlotte to Lincolnton. Before the yard could be reassembled and activated in Lincolnton, the war ended. After the war the yard’s previous landowner, Colonel John Wilkes, repossessed the property, for which the Confederate government had never paid him. Where the Confederate Navy Yard once operated, he established Mecklenburg Iron Works. It operated from 1865 until 1875 when it burned.

Note the vague “the war ended” sentence. This supposedly historic account obscures the simple context of the Confederates losing the war. I find that extremely annoying.

To make the problem more clear, compare the above L-56 official account with the UNC Charlotte Special Collections version of the same history:

The exact date of the formation of the Mecklenburg Iron Works is unknown, as is ownership of the firm until its purchase in 1859 by Captain John Wilkes. There is evidence, though, that the firm existed as early as 1846. The son of Admiral Charles Wilkes, John was graduated first in his class at the U.S. Naval Academy in 1847. Following a stint in the U.S. Navy, Wilkes married and moved to Charlotte in 1854. Two years after he purchased the iron works, the Confederate government took it over and used it as a naval ordnance depot. After the Civil War, Wilkes regained possession of the Iron Works, which he operated until his death in 1908. His sons, J. Renwick and Frank, continued the business until 1950, when they sold it to C. M. Cox and his associates.

So many things to notice here:

  1. There was a Captain John Wilkes, not Colonel, although neither story says for which side he fought. An obituary lists him as U.S. Navy and says he was active during the Civil War
  2. Captain John Wilkes was the son of infamous Union Navy Admiral Charles Wilkes, who was court-martialed in 1864. Was John, the son, fighting for the North with his father, or for the South against him?
  3. There is evidence these Iron Works were established long before the Civil War. NC Markers says “as early as 1846”. The Charlotte library says Vesuvius Furnace, Tizrah Forge and Rehoboth Furnace were operating 35 years earlier, with a picture of the Mecklenburg Iron Works to illustrate 1810.(1)
  4. Wilkes was not just the “yard’s previous landowner”; he ran an iron works for two years before the Confederate government took possession of it. Did he lose it when he went to fight for the North, or did he give it to help fight for the South? Seems important to specify, yet no one does. In any case the iron works was pre-established, used during the Civil War and continued on afterwards

The bigger question of course is who cares that there is a Confederate Navy yard in Charlotte, North Carolina? Why was a sign created in 1954 to commemorate the pro-slavery military?

Taking a picture of the sign meant I could show it to an executive business woman I met in Charlotte, and I asked her why it was there. She told me “Democrats put up that sign for their national convention”. She gave this very strangely political answer about the Democrats in her very authoritative voice while being completely wrong. She ended with an explanation that there was no mention of slavery because (yelling at me and walking away) “CIVIL WAR WAS ABOUT TAXES, NOT SLAVERY. I KNOW MY HISTORY”.

I found this also very annoying. Apparently educated white elites in North Carolina somehow have come to believe the Civil War was not about slavery. She was not the only one to say this.

What actually happened, I found with a little research, was that the North Carolina Highway Historical Marker Program started in 1935. They put up the signs, with no mention of Democrats or political conventions, as you can tell from the link I gave at the start of this post.

Here is how the NC Markers program explains the official purpose of a CONFEDERATE NAVY YARD sign on the street:

For residents the presence of a state marker in their community can be a source of pride

Source of pride.

Honestly I do not see what they are talking about. What are people reading this sign meant to be proud of, exactly? Is a failed attempt by a pro-slavery military to create a navy a proud moment? Confederate yards apparently failed because of huge shortages in raw materials and labor, which ultimately stemmed from failures in leadership. Is that pride material?

What am I missing here?

The sign is dated 1954. Why this date? It was the year the U.S. Supreme Court struck down the “separate but equal” doctrine, opening the door for the civil rights movement. It was also the year after Wilkes’ oldest surviving child died. Does a pro-slavery military commemoration sign somehow make more sense in 1954 (a city thumbing its nose at the Supreme Court, or maybe a bequest of Wilkes’ last remaining child) than it does in 2016?

A petition at the University of Mississippi to change one of their campus monuments explains the problem with claiming this as a pride sign:

Students and faculty immediately objected to this language, which 1) failed to acknowledge slavery as the central cause of the Civil War, 2) ignored the role white supremacy played in shaping the Lost Cause ideology that gave rise to such memorials, and 3) reimagined the continued existence of the memorial on our campus as a symbol of hope.

[…]

From the 1870s through the 1920s, memorial associations erected more than 1,000 Confederate monuments throughout the South. These monuments reaffirmed white southerners’ commitment to a “Lost Cause” ideology that they created to justify Confederate defeat as a moral victory and secession as a defense of constitutional liberties. The Lost Cause insisted that slavery was not a cruel institution and – most importantly – that slavery was not a cause of the Civil War.

Kudos to the Mississippi campaign to fix bad history and remove Lost Cause propaganda. The North Carolina sign’s 1950s date suggests there might be a longer period of monuments being erected. When I travel to the South I am always surprised to run into these “proud” commemorations of slavery and a white-supremacy military. I am even more surprised that the residents I show them to usually have no idea where exactly they are, why they still are standing or who put them up.

At the very least North Carolina should re-write the sign to be more accurate. Here is my suggestion:

MECKLENBURG IRON WORKS: Established here 1810. Seized by pro-slavery militia 1862 in failed attempt to supply Navy after defeat in Portsmouth, Va. Liberated 1865

That seems fair. The official “essay” of the NC Markers really should also be rewritten.

For example NC Markers wrote:

in time it began to encounter difficulty obtaining and retaining trained workers

Too vague. I would revise that to “Southerners depended heavily on immigrants and Northerners for shipyard labor. As soon as the first shots were fired upon the Union by the South, starting the Civil War, many of the skilled laborers left and could not be replaced. Over-mobilization of troops further contributed to huge labor shortages.”

NC Markers also wrote:

given its location along the North Carolina Railroad and the South Carolina Railroad, it was connected to several seaboard cities, enabling it to transport necessary products to the Confederate Navy

Weak analysis. I would revise that to “despite creating infrastructure to make use of the Confederate Navy Yard it had no worth without raw materials. Unable to provide enough essential and basic goods, gross miscalculation by Confederate leaders greatly contributed to collapse of plans for a Navy”

But most of all, when they wrote “the war ended” I would revise to say “the Confederates surrendered to the Union, and with their defeat came the end of slavery”.

Let residents be proud of ending the pro-slavery nation, or more specifically returning the Iron Works to something other than fighting for perpetuation of slavery.

So here is the beginning of the story, at its end. Look at this sign on the street in Charlotte, next to Bank of America headquarters:

charlotte-pro-slavery-militia-memorial-sign


(1) 1810 – Iron Industry screenshot from Charlotte – Mecklenburg Library
1810-IronIndustry-Mecklenburg