Our first book detailed the infrastructure risks in cloud environments and gave basic instructions for building a cloud safely.
However, I realized right away, as I watched operations going awry, that a second book would be necessary. People offering data “services” in cloud environments were doing so unethically.
That’s why since 2013 I’ve been working on tangible, actionable solutions to problems in cloud environments like the impostor CISO, the immoral SRE, and the greedy CEO.
It has been a much harder book to write because The Realities of Securing Big Data crosses many functional lines in an organization, from legal to engineering, sales to operations. Long in the making, it hopefully will clarify how and why things like this keep happening, as well as what exactly we can do about it:
We recently found that some email addresses and phone numbers provided for account security may have been used unintentionally for advertising purposes. This is no longer happening and we wanted to give you more clarity around the situation: https://help.twitter.com/en/information-and-ads
…and that led to everyone asking an obvious question.
You may remember a very similar incident last year and wonder why nobody at Twitter thought to test their systems to make sure they didn’t have the same security flaws as a safety laggard like Facebook.
Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all.
Facebook and Twitter, after flashy high-profile CISO hires and lots of PR about privacy, both have sunk to terrible reputations. They rank near Wells Fargo in terms of customer confidence.
Facebook, which has experienced a tumultuous time due to privacy concerns and election interference issues, ranked 94th. Wells Fargo ranked 96th. The Trump Organization ranked 98th, considered a “very poor” reputation.
The Drum says even the advertising industry is calling out Twitter for immorality and incompetence:
Neville Doyle, chief strategy officer at Town Square, suggested it was “enormously improbable” that Twitter ‘inadvertently’ improved its ad product with the sensitive data, and blasted the tech giant for being “either immoral or incompetent”. Either way, he said, it was playing “fast and loose with users’ privacy”. Respected ad-tech and cybersecurity expert Dr Augustine Fou, who was previously chief digital officer at media agency Omnicom’s healthcare division, also branded Twitter’s announcement as “total chickenshit”. Last July, the Federal Trade Commission (FTC) fined Facebook $5bn for improperly handling user data, the largest fine ever imposed on a company for violating consumers’ privacy.
The technology fixes ahead, as well as the management fixes, are more straightforward than you might imagine.
In brief, you can trust a cloud provider when you can verify in detail that a specific set of data boundaries and controls are in place, with transparency around staffing authorizations and experience related to delivering services. Over the years I’ve led many engineering teams to build exactly this, so I’m speaking from experience of what’s possible. I’ve stood in customer executive meetings, up to the highest levels, to detail how controls work and why the system was designed to mitigate cloud insider threats.
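To make “verify in detail” concrete, here is a minimal policy-as-code sketch in Python. Every control name and required value below is a hypothetical illustration, not any specific provider’s attestation format:

    # A sketch of what "verify in detail" can mean: check a provider's
    # claimed controls against required data boundaries instead of
    # trusting a marketing page. All names here are hypothetical.
    REQUIRED_CONTROLS = {
        "data_residency": "eu-only",            # where data may be stored
        "tenant_isolation": "per-tenant-keys",  # boundary between customers
        "admin_access": "just-in-time",         # no standing insider access
        "audit_log_export": "customer-owned",   # evidence leaves the provider
    }

    def verify_attestation(attestation):
        """Return the list of required controls that fail verification."""
        return [name for name, required in REQUIRED_CONTROLS.items()
                if attestation.get(name) != required]

    # Example: a provider that keeps audit logs to itself fails the check.
    claimed = dict(REQUIRED_CONTROLS, audit_log_export="provider-owned")
    print(verify_attestation(claimed))  # -> ['audit_log_export']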
You should be especially concerned if management lacks an open and public resume of prior steps taken over years to serve the privacy needs of others, let alone management that lacks the ability to deconstruct how their control architecture was built from the start to serve your best interests.
What has been hard, especially through the years of Amazon’s “predator bully” subscription model being worshiped by sales teams, is keeping safety oriented around helping others. Tech cultures in America tend to cultivate “leaders” who think of innovation as separation, leaving them no way to relate to the people they are serving.
The tone now seems to be changing as disclosures increase and we see exposure of wrongdoing by people who claimed to serve others while being unable to relate to them. Hoarding other people’s assets for self-gain, in a thinly-veiled spin of being their “service provider”, should never have been the meaning of cloud.
The Russian Geographical Society used one of its modern landing craft in a way a mother walrus didn’t appreciate, so most news outlets are describing how she attacked their boat, sinking it and sending the Russian military running for their lives.
Naturally the Russian military made a statement that reported the opposite:
“Serious troubles were avoided thanks to the clear and well co-ordinated actions of the Northern Fleet servicemen, who were able to take the boat away from the animals without harming them.”
Definitely avoided any serious troubles there. “Able to take the boat away” from threats is double-speak for sinking. Aha, you can’t attack Russian boat, because there is no boat. Troubles avoided! Swim faster comrades, it is very cold.
Maybe something was lost in translation when Russians said they thought they were up to the tusk (pun intended).
“The walrus was not injured. We just shoved her off. Our boat was damaged – sections three and five. Barely made it to shore” said Leonid. (Морж не пострадал. Мы его просто отпихнули. А лодка пробита — три секции и пяти. Еле доплыли до берега, сообщает Леонид.)
Speaking of being lost…
The area in question supposedly is on Wilczek Island, (Остров Вильчека) in the southeastern end of Franz Josef Land, Arkhangelsk Oblast, Russia. Maybe it’s somewhere else?
I have yet to find a western map anywhere listing a “Cape Geller” (мысе Геллера). Who was Geller?
Mark Zuckerberg has repeatedly lied to the American people about privacy. I think he ought to be held personally accountable, which is everything from financial fines to—and let me underline this—the possibility of a prison term. Because he hurt a lot of people. And, by the way, there is a precedent for this: In financial services, if the CEO and the executives lie about the financials, they can be held personally accountable.
Often in 2018 I made similar suggestions, based on the thought that our security industry would mature faster if a CSO could be held personally liable like a CEO or CFO (e.g. post-Enron SOX requirements):
And at Blackhat this year I met with Facebook security staff who said that during the 2016-2017 timeframe the team internally knew the severity of election interference and were shocked when their CSO failed to disclose this to the public.
Maybe the Senator putting it all on the CEO today makes some sense strategically…yet it also raises the question of whether an “officer” of security was drawing pay sufficient to afford a $3m house in the hills of Silicon Valley while intentionally withholding data on major security breaches during his watch.
Given the appointment of a dedicated officer in charge of security, are we meant to believe he was taking a big salary only to follow orders and bear no personal responsibility? Don’t forget he drew press headlines (without qualification) as an “influential” executive joining Facebook, while at the same time leaving Yahoo because he said he wasn’t influential.
To be fair he posted a statement explaining his decision at the time, and it did say that safety is the industry’s responsibility, or his company’s, not his. Should that have been an early warning he wasn’t planning to own anything that went awry?
I am very happy to announce that I will be joining Facebook as their Chief Security Officer next Monday…it is the responsibility of our industry to build the safest, most trustworthy products possible. This is why I am joining Facebook. There is no company in the world that is better positioned to tackle the challenges…
There also is a weird timing issue. The start of the Russian campaign coincides with Facebook bringing on its new CSO. Maybe there’s nothing to this timing, just coincidence, or maybe the Russians knew they were looking at an inexperienced leader. Or maybe they even saw him as “coin-operated” (a term allegedly applied to him by US Intelligence), meaning they knew how easily he would stand down or look away:
July 2017: Facebook officially states “we have seen no evidence that Russian actors bought ads on Facebook”
September 2017: Facebook backtracks and admits it knew (without revealing exactly how soon) Russian actors bought ads on Facebook
September 2017: Zuckerberg muddies their admission by saying “…investigating this for many months, and for a while we had found no evidence of fake accounts linked to Russia running ads”, which focuses on knowledge of fake accounts being used, rather than the more important knowledge Russia was running ad campaigns
October 2017: Facebook’s Policy VP issues a “we take responsibility” statement
October 2017: Facebook admits 80,000 posts from 2015 (i.e. from when Stamos started as CSO) all the way to 2017 (i.e. when Stamos was still CSO) reached over 120 million people. Stamos brands himself both as the influential officer in charge of uncovering harms and as a wallflower paid an officer salary not to speak out. It does somehow come back to the point that the Russian Internet Research Agency allegedly began operations only after Stamos joined. Even if it started before, though, he definitely did not disclose what he knew when he knew it. His behavior echoes a failure to disclose massive breaches while he was attempting his first CSO role at Yahoo! (see step 1 above)
Given the security failures from 2015 to 2017, we have to seriously consider the implications of a sentence that described Stamos’ priors, which somehow are what led him into being a Facebook CSO:
At the age of 36, Stamos was the chief technology officer for security firm Artemis before being appointed as Yahoo’s cybersecurity chief in March 2014. In the month of February, Stamos in particular clashed with NSA Director Mike Rogers over decrypting communications, asking whether “backdoors” should be offered to China and Russia if the US had such access.
There are a couple of problems with this paragraph, easily seen in hindsight.
First, Artemis wasn’t a security firm in any real sense. It was an “internal startup at NCC Group” and a concept that had no real product and no real customers. As CTO he hired outside contractors to write software that never launched. This doesn’t count as proof of either leadership or technical success, and certainly doesn’t qualify anyone to be an operations leader like CSO of a public company.
Second, nobody in their right mind in technology leadership, let alone security, would ask if China and Russia are morally equivalent to the United States government when discussing access requests. That signals a very weak grasp of ethics and morality, as well as international relations. I’ve spoken about this many times.
U.S. access in no way implies other governments are somehow morally entitled to the same access. Moreover, this was very publicly discussed in 2007, when Yahoo’s CEO was told not to give the Chinese the access they requested (when Stamos was 28):
An unusually dramatic congressional hearing on Yahoo Inc.’s role in the imprisonment of at least two dissidents in China exposed the company to withering criticism and underscored the risks for Western companies seeking to expand there. “While technologically and financially you are giants, morally you are pygmies,” said Rep. Tom Lantos (D., Calif.)
If anything these two points probably should have disqualified him from becoming CSO of Facebook, and that’s before we get into his one-year attempt to be CSO at Yahoo! that quickly ended in disaster.
In 2014, Stamos took on the role of chief information security officer at Yahoo, a company with a history of major security blunders. More than one billion Yahoo user accounts were compromised by hackers in 2013, though it took years for Yahoo to publicly report…Some of his biggest fights had to do with disagreements with CEO Marissa Mayer, who refused to provide the funding Stamos needed to create what he considered proper security…
Let me translate. Stamos joined and didn’t do the job of disclosing breaches because he was campaigning for more money. He was spending millions (over $2m went into prizes paid to security researchers who reported bugs). While his big-spend, bounty-centric program was popular among researchers, it didn’t build trust among customers. This parallels his work as CTO, which didn’t build any customer trust at all.
The kind of statements Stamos made about Artemis launching in the future (never happened) should have been a warning. Clearly he thought taking over a “dot secure” domain name and then renting space to every dot com in the world was a lucrative business model (it wasn’t).
I’m obviously not making this up as you can hear him describe rent-seeking with a straight face. His business model was to use a private commercial entity to collect payments from anyone on the Internet in exchange for a safety flag to hang on a storefront, in a way that didn’t seem to have any fairness authority or logical dispute mechanism.
In late 2010, iSEC was acquired by the British security firm, NCC Group, but otherwise the group continued operating much as before. Then, in 2012, Stamos launched an ambitious internal startup within NCC called Artemis Internet. He wanted to create a sort of gated community within the internet with heightened security standards. He hoped to win permission to use “.secure” as a domain name and then require that everyone using it meet demanding security standards. The advantage for participants would be that their customers would be assured that their company was what it claimed to be—not a spoof site, for instance—and that it would protect their data as well as possible. The project fizzled, though. Artemis was outbid for the .secure domain and, worse, there was little commercial enthusiasm for the project. “People weren’t that interested,” observes Luta Security’s Moussouris, “in paying extra for a domain name registrar who could take them off the internet if they failed a compliance test.”
Imagine SecurityScorecard owning the right to your domain name and disabling you until you pay them to clean up the score they gave you. Dare I mention that a scorecard compliance engine is full of false positives and becomes a quality burden that falls on the companies being scanned? Again, this was his only ever attempt at being a CTO (before he magically branded himself a CSO) and it was an unsuccessful non-starter, a fizzle, a dud.
From that he somehow pivoted into a publicly traded company as an officer of security. Why? How? He abruptly quit Artemis by taking on a CSO role at Yahoo, demanding millions for concept projects more akin to a CTO’s than a CSO’s. He even made promises upon taking the CSO role to build features that he never delivered. Although I suppose the greater worry still is that he did not disclose breaches.
It was after all this that he wanted to be called CSO again, this time at Facebook. That is what Wyden should be investigating. I mean I’m fine with Wyden making a case for the CEO to be held accountable as a starting point, the same way we saw Jeff Skilling of Enron go to jail.
Stamos says he deserves as much blame as anyone else for Facebook being slow to notice and stamp out Russian meddling in the 2016 presidential election
Ironically Stamos, failing to get anywhere with his three attempts at leadership (Artemis, Yahoo and Facebook), has now somehow reinvented himself (again with no prior experience) as an ethics expert. He has also found someone to fund his new project to the tune of millions, which at Blackhat some Facebook staff described to me as his way to help Facebook avoid regulations by laundering their research as “academic”.
It will be interesting to see if Wyden has anything to say about a CSO being accountable in the same ways a CFO would be, or if focus stays on the CEO.
In any case, after a year of being CSO at Yahoo and three years of being CSO at Facebook, Stamos’ total career amassed only four years as a head of security.
Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.
“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”
The class-action lawsuit against Yahoo’s security practices under Stamos provides the following timeline:
2014 Data Breach: In November 2014, malicious actors were able to gain access to Yahoo’s user database and take records of approximately 500 million user accounts worldwide. The records taken included the names, email addresses, telephone numbers, birth dates, passwords, and security questions and answers of Yahoo account holders, and, as a result, the actors may have also gained access to the contents of breached Yahoo accounts, and thus, any private information contained within users’ emails, calendars, and contacts.
Update September 7th, 2019:
In another meeting with ex-Facebook staff I was told that when a “CEO and CSO are nice people” that should mean they don’t go to jail for crimes, because nice people shouldn’t go to jail.
This perspective has me wondering what the same people would say if I told them Epstein had a lot of friends who said he was nice. I mean their “nice” get out of jail free card suggests to me some kind of context change might help.
I will raise the issue in my CS ethics lectures first using an example outside the tech industry: should the captain of a sunken ship face criminal investigation for saving himself as 34 passengers died in an early morning fire? Then I will ask about the behavior of the CSO on deck during the Yahoo and Facebook breaches.
The tragedy of Boeing’s 737 product security decisions creates a sad trifecta for someone interested in aeronautics, lessons from the past, and risk management.
First, there was a sailor’s warning.
We know Boeing moved a jet engine into a position that fundamentally changed handling. This was a result of Airbus’s ability to add a more efficient engine to its popular A320. The A320 has more ground clearance, so a larger engine didn’t change anything in terms of handling. The 737 sits lower to the ground, so changing to a more efficient engine suddenly became a huge design change.
Here’s how it unfolded. In 2011 Boeing saw a new Airbus design as a direct threat to profitability. A sales-driven rush meant efficiency became a critical feature for their aging 737 design. The Boeing perspective on the kind of race they were in was basically this:
Boeing had to solve for a plane much closer to the ground, while achieving the same marketing feat as Airbus, which said the efficiency didn’t change a thing (thus no costly pilot re-training). This is where Boeing made the critical decision to push their engine design forward and up on the wing, while claiming that pilots did not need to know anything new about handling characteristics.
Don’t ask me why an Australian TV show didn’t call their segment “Mad Max”.
And that is basically why handling the plane was different, despite Boeing’s claims that their changes weren’t significant, let alone safety-related. The difference in handling was so severe (risk of stall) that Boeing then doubled down with a clumsy software hack to flight control systems to secretly compensate for the handling changes (as well as selling airlines an expensive sensor “disagree” light for pilots, which the downed planes hadn’t purchased).
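A simplified sketch of the aerodynamics (my own illustration with assumed symbols, not Boeing’s analysis): nacelles mounted further forward and up generate lift of their own at high angle of attack, and lift applied ahead of the center of gravity adds a nose-up pitching moment:

$$ \Delta M_{\text{pitch}} \approx L_{\text{nacelle}}(\alpha)\, x_{\text{arm}}, \qquad \frac{\partial L_{\text{nacelle}}}{\partial \alpha} > 0 $$

Here x_arm is the (assumed) distance from the nacelle’s lift point to the center of gravity. Because nacelle lift grows with angle of attack, the nose-up push grows exactly as the aircraft nears a stall, which is the instability the software hack was written to mask.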
An odd twist to this story is that it was American Airlines who kicked off the Boeing panic about sales with a 2011 order for several hundred new A320s. See if you can pick out the more forward and higher engine design in this illustration handed out to passengers.
I added this to the story to note again how Boeing wanted to emphasize “identical” planes to pilots, yet the planes were marketed heavily as different, even in an in-flight magazine given to every passenger. It stands in contrast to how that same airline’s pilots were repeatedly told by Boeing the two planes held no differences in flight worth highlighting in documentation.
To make an even finer point, the Airbus A320 in that same airline magazine doesn’t have a sub-model.
While this engine placement clearly had been approved by highly-specialized engineering management thinking short-term (about racing through FAA compliance), who was thinking about serious instability long-term as a predictable cost?
Anyone who sails, let alone flies airplanes, immediately can see the problem in calling a 737 “Mad Max” the same as a prior 737 design when flow handling has changed: one doesn’t just push a keel or mast around without direct tiller effects.
Some pilots say unofficially they knew the 737 “Mad Max” was not the same and, at least in America, were mentally preparing themselves for how to react to a defective system. Officially, however, pilots globally needed to be warned clearly and properly, as well as trained better on the faulty software that would fight with them for safe control of the aircraft.
The B-26 had a high rate of accidents in takeoff and landing until crews were trained better and the aspect ratio modified on its wings/rudder
That doesn’t tell the whole story, though. Second, in terms of history repeating itself, evidence mounted that this American airplane was manifestly unsafe to fly and the manufacturer wasn’t inclined to proactively fix it and save lives.
Apparently crashes of the Martin B-26 were happening at least every month and sometimes every other day. Yes, crashes were literally happening 15 days out of 30 and the plane wasn’t grounded.
The Martin company in response to concerns started a PR campaign to gloat about how one of its aircraft actually didn’t kill everyone on board and received blessings from Churchill.
Promoting survivorship should be recognized today as a dangerous and infamously bad data tactic. Focusing on the economics of Boeing is the right thing here. They haven’t stooped yet to Martin’s survivorship bias campaign, but it does seem that Boeing knowingly was putting lives at risk to win a marketing and sales battle with a rival, similar to what Tesla could be accused of doing.
Third, there are broad societal issues from profitable data integrity flaws.
Can we speak openly yet about the executives making money on big data technology with known integrity flaws that kill customers?
There’s really a strange element to this story from a product management decision flow. Nobody should want to end up where we are today with this issue.
Boeing knew right away its design change impacted the handling of the product. They then added fixes without notifying the customers responsible for operating the product of the severity of a fix failure (crash).
Investigation of development and certification of the Boeing 737 MAX by the FAA and Boeing, by DoJ Fraud Section, with help from the FBI and the DoT Inspector General
Administrative investigation by the DoT Inspector General
DoT Inspector General hearings
FAA review panel on “certification of the automated flight-control system on the Boeing 737 MAX aircraft, as well as its design and how pilots interact with it”
Congressional investigation of the “status of the Boeing 737 MAX” by the US House Transportation and Infrastructure Committee
These investigations seem all to be getting at the sort of accountability I’ve been saying needs to happen for Facebook, which also suffered from integrity flaws in its product design. Will a top executive eventually be named? And will there be wider impact to engineering and manufacturing ethics in general? If the Grover Shoe Factory disaster is any indication, the answers should be yes.
In conclusion, if a change in design is being deceptively presented, and the suffering of those impacted is minimized (because profits, duh), then we’re approaching a transportation regulatory moment that really is about software engineering. What may emerge is that these software-based transportation risks, because of their fatalities, will bring regulation for software in general.
Even if regulation isn’t coming, the other new reality is buyers (airlines, especially outside the US and beyond the FAA) will do what Truman suggested in 1942: cancel contracts and buy from another supplier who can pass transparency/accountability tests.
The manoeuvres performed by the robot closely resembled those observed in fruit flies. The robot was even able to demonstrate how fruit flies control the turn angle to maximize their escape performance. “In contrast to animal experiments, we were in full control of what was happening in the robot’s ‘brain’.”
Can’t help but notice how the researchers emphasize getting away from threats with “high-agility escape manoeuvres” as a primary motivation for their work, which isn’t bananas. In my mind escape performance translates to better wind agility and therefore weather resilience.
The research also mentions the importance of rapidly deflating costs in flying machines. No need to guess who would really need such an affordable threat-evading flying machine.
Developed by CIA’s Office of Research and Development in the 1970s, this micro Unmanned Aerial Vehicle (UAV) was the first flight of an insect-sized aerial vehicle (Insectothopter). It was an initiative to explore the concept of intelligence collection by miniaturized platforms.
The Insectothopter was plagued by inability to fly in actual weather, as even the slightest breeze would render it useless. In terms of lessons learned, the same problems cropped up with Facebook’s (now cancelled) intelligence collection by elevated platform.
On June 28, 2016, at 0743 standard mountain time, the Facebook Aquila unmanned aircraft, N565AQ, experienced an in-flight structural failure on final approach near Yuma, Arizona. The aircraft was substantially damaged. There were no injuries and no ground damage. The flight was conducted under 14 Code of Federal Regulations Part 91 as a test flight; the aircraft did not hold an FAA certificate of airworthiness.
Instead of getting into the “airworthiness” of fruit flies, I will simply point out that “final approach” is where the winds blow and the damage occurred. If only Facebook had factored in some escape performance maximization to avoid the ground hitting them so dangerously when they landed.
Speaking under oath inside the Naval Base San Diego courtroom, Little said that Losey was so scared of being recorded or followed that when the session wrapped up, the SEAL told the Navy investigator to leave first, so he couldn’t identify the car he drove or trace a path back to his home.
…he retaliated against subordinates during a crusade to find the person who turned him in for minor travel expense violations.
Artificial Intelligence, or even just Machine Learning for those who prefer organic, is influencing nearly all aspects of modern digital life. Whether it be financial, health, education, energy, transit…emphasis on performance gains and cost reduction has driven the delegation of human tasks to non-human agents. Yet who in infosec today can prove agents worthy of trust? Unbridled technology advances, as we have repeatedly learned in history, bring very serious risks of accelerated and expanded humanitarian disasters. The infosec industry has been slow to address social inequalities and conflict that escalates on the technical platforms under their watch; we must stop those who would ply vulnerabilities in big data systems, those who strive for quick political (arguably non-humanitarian) power wins. It is in this context that algorithm security increasingly becomes synonymous with security professionals working to avert, or as necessary helping win, kinetic conflicts instigated by digital exploits. This presentation therefore takes the audience through technical details of defensive concepts in algorithmic warfare based on an illuminating history of international relations. It aims to show how and why to seed security now into big data technology rather than wait to unpoison its fruit.
Facebook has built a reputation for being notoriously insecure, taking payments from attackers with little to no concern for the safety of its users; but a pattern of neglect for information security is not exactly the issue when a finance guy in Sydney, Australia gives a shout-out to a Facebook user for what he calls an “amazing shot” in history:
As anyone hopefully can see, this is a fake image. Here are some immediate clues:
Clarity. What photographic device in this timeframe would have such an aperture, let alone resolution?
Realism. The rocket exhaust, markings, ground detail…all too “clean” to be real. That exhaust in particular is an eyesore.
Positioning. Spitfire velocity and turbulence relative to V1 is questionable, so this overlapped steady formation is unlikely
Vantage point. Given positioning issue, photographer close position aft of Spitfire even less likely
That’s only a quick list to make the solid point that this is a fabrication anyone should be able to discount at first glance. In short, when I see someone say they found an amazing story or image on Facebook there’s a very high chance it’s toxic content meant to deceive and harm, much in the same way tabloid stands in grocery stores used to operate. Entertainment and attacks should be treated as such, not as realism or useful reporting.
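For clues beyond the naked eye, a quick forensic pass also helps. Here is a minimal error-level-analysis (ELA) sketch in Python using Pillow; the filename is a hypothetical placeholder. Regions recompressed differently from the rest of the image, such as a pasted-in exhaust plume, tend to glow in the output:

    import io
    from PIL import Image, ImageChops

    def error_level_analysis(path, quality=90):
        """Resave the image as JPEG and diff against the original;
        bright regions were compressed differently (possible edits)."""
        original = Image.open(path).convert("RGB")
        buf = io.BytesIO()
        original.save(buf, "JPEG", quality=quality)
        buf.seek(0)
        resaved = Image.open(buf)
        return ImageChops.difference(original, resaved)

    # "suspect.jpg" is a placeholder for the image being checked.
    error_level_analysis("suspect.jpg").save("ela.png")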
Now let’s dig a little deeper.
In 2013 an “IAF Veteran” posted a shot of a Spitfire tipping a V1. This passes many of the obvious tests above. Unfortunately he also inserts some nonsense in the text about the dangers of firing bullets and reliably blowing up a V1 in air, far away from civilians, versus sending it unpredictably to ground. Ignore that patently false analysis (shooting remained the default) and revel instead in period photographic image quality:
Shame this artist’s tweet with the image wasn’t given proper and full credit by the Sydney finance guy, as it would have made far more sense to have a link to the artist talking about their “new work” or even their gallery and exact release dates:
Who indeed? The artist answers their own question in their own next tweet, writing
First to physically tip a V1 bomb was Ken Collier, 91 Squadron, in a Spitfire MkIVX. He scored 7 V1 victories and was later KIA. #WWII #WW2.
On the bright side the artist answers their own question with some real history, worth researching further. On the dark side the artist’s answer also sadly omits any link to original source or reference material, let alone the (attempted) realism found above in that “IAF veteran” tweet with an actual photograph.
The artist simply says it is based on a real event, and leaves out the actual photograph (perhaps to avoid acknowledging the blurry inspiration for their art) while including a high-resolution portrait photo of the pilot who achieved it. It is kind of misleading to have that high-resolution photograph of Ken Collier sitting on the ground instead of the one the IAF Veteran tweeted of a Spitfire in flight.
The more complete details of this story not only are worth telling, they put the artist’s high-resolution fantasy reconstruction of a grainy blotchy image into proper context. Fortunately “V1 Flying Bomb Aces by Andrew Thomas” is also online and tells us through first-person accounts of a squadron diary what really happened (notice both original photographs together in this book, the plane and the pilot).
Normally a V1 would be shot down, as you can see in this Popular Mechanics article describing hundreds destroyed in 1944 by 20mm cannons on the Tempest.
It not only describes debris after explosions as a known risk to be avoided, leading to some gun modifications for longer ranges, it also characterizes tipping as unusual and low frequency, such as at the end of a run (gun jammed).
In the case of the artist rendering that started this blog post, a Spitfire pilot found himself firing until out of ammo. Frustrated without ammo, he decided to tip a wing of the V1. Shooting the V1 was preferred, as it would explode in the air and kill far fewer people than being tipped to explode on the ground. Only because he ran out of bullets, in a frustrated yet innovative state, did he decide to tip…of course later there would be others, but the total tipped this way was in the dozens, out of the many thousands destroyed.
Does the finance guy in Sydney feel accountable for claiming a real event in an artist’s fantasy image? Of course not. He has been responding to people that he thinks it still is a fine representation of a likely event, and he doesn’t measure any harm from the confusion caused; he believes the harm he has done still doesn’t justify making a correction.
Was he wrong to misrepresent it, and should he delete his “amazing shot” tweet and replace it with one that says amazing artwork or new rendering? Yes, that would be the sensible thing if he cares about history and accuracy, but the real question centers on the economics of why he won’t change. He has been repeatedly made aware that he has become a source of misinformation, yet the cost of losing “likes” probably weighs heavier on him than the cost of damaged integrity.
Next time you bang on a vending machine for a bottle that refuses to fall into your hands, ask yourself if restaurants soon will have only robots serving you meals.
Maybe it’s true there is no future for humans in service industries. Go ahead, list them all in your head. Maybe problems robots have with simple tasks like dropping a drink into your hands are the rare exceptions and the few successes will become the norm instead.
One can see why it’s tempting to warn humans not to plan on expertise in “simple” tasks like serving meals or tending a bar…take the smallest machine successes and extrapolate into great future theories of massive gains and no execution flaws or economics gone awry.
Just look at cleaning, sewing and cooking for examples of what will be, how entire fields have been completely automated with humans eliminated…oops, scratch that, I am receiving word from my urban neighbors they all seem to still have humans involved and providing some degree of advanced differentiation.
Maybe we should instead look at darling new startup Blue Apron, turning its back on automation, as it lures millions in investments to hire thousands of humans to generate food boxes. This is such a strange concept of progress and modernity to anyone familiar with TV dinners of the 1960s and the reasons they petered out.
Just me or is anyone else suddenly nostalgic for that idyllic future of food automation (everything containerized, nothing blended) as suggested in a 1968 movie called “2001”…we’re 16 years late now and I still get no straw for my fish container?
I don’t even know what that box on the top right is supposed to represent. Maybe 2001 predicted chia seed health drinks.
Speaking of cleaning, sewing and cooking with robots…someone must ask at some point why much of automation has focused on archetypal roles for women in American culture. Could driverless tech be targeting the “soccer-mom” concept along similar lines; could it arguably “liberate” women from a service expected of them by patriarchal roles?
Hold that thought, because instead right now I hear more discussion about a threat from robots replacing men in the over-romanticized male-dominated group of long-haul truckers. (Protip: women are now fast joining this industry)
Whether measuring accidents, inspections or compliance issues, women drivers are outperforming males, according to Werner Enterprises Inc. Chief Operating Officer Derek Leathers. He expects women to make up about 10 percent of the freight hauler’s 9,000 drivers by year’s end. That’s almost twice the national average.
The question is whether American daily drivers, of which many are professionals in trucks, face machines making them completely redundant just like vending machines eliminating bartenders.
It is very, very tempting to peer inside any industry and make overarching forecasts of how jobs simply could be lost to robots. Driving a truck on the open roads, between straight lines, sounds so robotic already to those who don’t sit in the driver’s seat. Why this has not already been automated is the question we should be answering, rather than how soon it will happen.
Only at face value does driving present a bar so low (pun not intended) machines easily could take it over today. Otto of 1980 “Airplane!” fame comes to mind for everyone I’m sure, sitting ready to be, um, “inflated” and take over any truck anywhere to deliver delicious TV dinners.
Yet when scratching at the barriers, maybe we find trucking is more complicated than this. Maybe there is more to human processes, something really intelligent, than meets the eye of a robotics advocate with no industry experience.
Systems that have to learn, true robots of the future, need to understand the totality of the environment they will operate within. And this raises the question of “knowledge” about all tasks being replaced, not simply the ones we know of from watching Hollywood interpretations of the job. A common mistake is to underestimate knowledge and predict its replacement with an incomplete checklist of tasks believed to point in the general direction of success.
Once the environmental underestimation mistake is made, another mistake is to forecast cost improvements by accelerating checklists towards a goal of immediate decision capabilities. We have seen this with bank ATMs, which actually cost a lot of money to build and maintain and never achieved replacement of teller decision-trees; even more security risks and fraud were introduced, requiring humans to develop checklists and perform menial tasks to maintain ATMs, which still haven’t achieved full capability. This arguably means new role creation is the outcome we should expect, mixed with modest or even slow decline of jobs (less than 10% over 10 years).
Automation struggles to eliminate humans completely because of the above two problems (the need for common sense and foundations, and the need for immediate decision capabilities based on those foundations), and that’s before we even get to the need for memory, feedback loops and strategic thinking. The latter two are essential for robots replacing human drivers. Translation to automation brings out nuances in knowledge that humans excel at, as well as long-term thinking both forwards and backwards.
Machines are supposed to move beyond limited data sets and be able to increase minimum viable objectives above human performance, yet this presupposes success at understanding context. Complex streets and dangerous traffic situations are a very high bar to achieve, so high they may never be reached without human principled oversight (e.g. ethics). Without deep knowledge of trucking in its most delicate moments the reality of driver replacement becomes augmentation at best. Unless the “driver” definition changes, goal posts are moved and expectations for machines are measured far below full human capability and environmental possibility, we remain a long way from replacement.
Take for example the amount of time it takes to figure out the risk of killing someone in an urban street full of construction, school and loading zones. A human is not operating within a window 10 seconds from impact, because they typically aim to identify risks far earlier, avoiding catastrophes born from leaving decisions to the last seconds.
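To put rough numbers on that decision window, here is a back-of-envelope stopping-distance sketch in Python. The reaction time and deceleration values are assumptions for illustration, not measured truck data:

    def stopping_distance_m(speed_kmh, reaction_s=1.5, decel_ms2=5.0):
        """Total stopping distance = reaction distance + braking distance."""
        v = speed_kmh / 3.6                 # convert km/h to m/s
        reaction = v * reaction_s           # distance covered before braking starts
        braking = v * v / (2 * decel_ms2)   # kinematics: v^2 / (2a)
        return reaction + braking

    # A truck at 50 km/h in a school zone needs roughly 40 m to stop,
    # which is why risks must be identified well before the last seconds.
    print(round(stopping_distance_m(50), 1))  # -> 40.1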
The Inquest Jury found a verdict of manslaughter against the driver,—a boy under fifteen years of age, and who appeared to have erred more from incapacity than evil design; and gave a deodand of 50l. against the horse and cabriolet, to mark their sense of the gross impropriety of the owner in having intrusted the vehicle to so young and inexperienced a person.
Young and inexperienced is exactly what even the best “learning” machines are today. Sadly, for most of the 19th century, London authorities showed remarkably little interest in shared-ride driving ability. Tests to protect the public from weak, incapacitated or illogical drivers of “public carriages” started only around 1896.
Finding the balance between insider expertise based on experience and outsider novice views is the dialogue playing out behind the latest NHTSA automation scales meant to help regulate safety on our roads. People already are asking whether the costs to develop autonomous vehicles that can go higher than “level three” (the system cedes control to a human under certain conditions and environments) are justified. That third level of automation is what typically is argued by outsiders to be the end of the road for truck drivers (as well as soccer moms).
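For reference, here is the scale in question, the SAE J3016 levels NHTSA adopted, paraphrased as a simple Python mapping (see the standard for exact wording):

    # SAE J3016 driving automation levels, paraphrased:
    SAE_LEVELS = {
        0: "No automation: human performs all driving tasks",
        1: "Driver assistance: steering OR speed support",
        2: "Partial automation: steering AND speed, human monitors constantly",
        3: "Conditional automation: system drives, human must take over on request",
        4: "High automation: no human fallback needed, within a limited domain",
        5: "Full automation: no human fallback needed, in any domain",
    }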
The easy answer to the third level is no, it still appears to be years before we can SAFELY move above level three and remove humans in common environments (not least of all because hit-and-run murder economics heavily favor driverless fleets). Cost reductions today through automation make far more sense at the lower ends of the scale, where human driver augmentation brings sizable returns and far fewer chances of disaster or backlash. Real cost, human life error, escalates quickly when we push into the full range of even the basic skills necessary to be a safe driver in every environment or any street.
There also is a more complicated answer. By 2013 we saw Canadian trucks linking up on Alberta’s open roads and using simple caravan techniques. Repeating methods known for thousands of years, caravans significantly dropped driver fatigue and energy costs, like a camel watching the tail of the one in front through a sandstorm. In very limited private environments (e.g. competitions, ranches, mines, amusement parks) the cost of automation is less and the benefits are realized early.
I say the answer is complicated because a level three autonomous vehicle still must have a human at the controls to take over, and I mean always. The NHTSA has not yet provided any real guidance on what that means in reality. How quickly a human must take over leaves a giant loophole in defining human presence. Could the driver be sleeping at the controls, watching a movie, or even reposing in the back seat?
The Interstate system in America has some very long-haul segments with traffic flowing at similar speed with infrequent risk of sudden stop or obstruction. Tesla, in their typically dismissive-of-safety fashion despite (or maybe because of) their cars repeatedly failing and crashing, called major obstructions on highways a “UFO” frequency event.
Cruise control and lane-assist in pre-approved and externally monitored safe-zones in theory could allow drivers to sleep as they operate, significantly reducing travel times. This is a car automation model actually proposed in the 1950s by GM and RCA, predicted to replace drivers by 1974. What would the safe-zone look like? Perhaps one human taking over the responsibility by using technology to link others, like a service or delegation of decision authority, similar to air traffic control (ATC) for planes. Tesla is doing this privately, for those in the know.
Ideally if we care about freedom and privacy, let alone ethics, what we should be talking about for our future is a driver and a co-pilot taking seats in the front truck of a large truck caravan. Instead of six drivers for six trucks, for example, you could find two drivers “at the controls” for six trucks connected by automation technology. This is powerful augmentation for huge cost savings, without losing essential control of nuanced/expert decisions in myriad local environments.
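A back-of-envelope sketch of those savings in Python, with every number below an assumption for illustration only:

    # Hypothetical platooning economics: two drivers lead six linked trucks.
    trucks = 6
    drivers_solo = 6                  # today: one driver per truck
    drivers_caravan = 2               # proposed: driver + co-pilot in lead truck
    annual_cost_per_driver = 60_000   # assumed fully-loaded cost in USD

    savings = (drivers_solo - drivers_caravan) * annual_cost_per_driver
    print(f"${savings:,} saved per year per six-truck caravan")  # -> $240,000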
This has three major benefits. First, it helps with the shortage of needed drivers, mentioned above as being filled by women. Second, it allows robot proponents to gather real-world data with safe open-road operations. Third, it opens the possibility of job expansion and transitions for truckers to drone operations.
On the other end of the spectrum from boring unobstructed open roads, in terms of driverless risks, are the suburban and urban hubs (warehouses and loading docks) that manage complicated truck transactions. Real human brain power still is needed for picking up and delivering the final miles unless we re-architect the supply-chain. In a two-driver, six-truck scenario this means after arriving at a hub, trucks return to a one-driver, one-truck relationship, like airplanes reaching an airport. Those trucks lacking human drivers at the controls would sit idle in queue or…wait for it…be “remotely” controlled by the locally present human driver. The volume of trucks (read: percentage of “drones”) would increase significantly while the number of drivers needed might actually decline only slightly.
Other situations still requiring human control tend to be bad weather or roads lacking clear lines and markings. Again this would simply mean humans at the controls of a lead vehicle in a caravan. Look at boats or planes again for comparison. Both have had autopilots far longer, at least for decades, and human oversight has yet to be cost-effectively eliminated.
Could autopilot be improved to avoid scenarios that lead to disaster, killing their human passengers? Absolutely. Will someone pay for autopilots to avoid any such scenarios? Hard to predict. For that question it seems planes are where we have the most data to review because we treat their failures (likely due to concentrated loss of life) with such care and concern.
There’s an old saw about Allied bombers of WWII being riddled with bullet holes yet still making it back to base. After much study the Air Force put together a presentation and told a crowded room that armor would be added to all the planes where concentrations of holes were found. A voice in the back of the crowd was heard asking “but shouldn’t you put the armor where the holes aren’t? Where are the holes on planes that didn’t come back?”
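A toy simulation of that survivorship lesson in Python (the sections and the “fatal” assumption are invented for illustration) shows how inspecting only the bombers that return makes the truly vulnerable areas look untouched:

    import random

    SECTIONS = ["wings", "fuselage", "tail", "engines", "cockpit"]
    FATAL = {"engines", "cockpit"}  # assumption: hits here down the plane

    random.seed(1)
    returned_hits = {s: 0 for s in SECTIONS}
    for _ in range(10_000):
        hit = random.choice(SECTIONS)  # shots land uniformly at random
        if hit not in FATAL:
            returned_hits[hit] += 1    # only survivors get inspected at base

    # In this toy model the survivors show zero holes in the engines or
    # cockpit, exactly the places that need the armor.
    print(returned_hits)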
It is time to focus our investments on collecting and understanding failures to improve the driving algorithms of humans, by enhancing the role of drivers. The truck driver already sits on a massively complex array of automation (engines and networks), so adding more doesn’t equate to removing the human completely. Humans still are better at complex situations such as power loss or reversion to manual controls during failures. Automation can make the flat open straight line into the sunset more enjoyable, as well as the blizzard and the frozen surface, but only given no surprises.
Really we need to be talking about enhancing drivers, hauling more over longer distances with fewer interruptions. Beyond reduced fatigue and increased alertness with less strain, until systems move above level three automation the best-case use of automation is still augmentation.
Drivers could use machines to make ethical improvements to their complex logistics of delivery (lower emissions, increased fuel efficiency, reduced strain on the environment). If we eliminate drivers in our haste to replace them, we could see fewer benefits and achieve only the lowest forms of automation, the ones outsiders would be pleased with while those who know better roll their eyes in disappointment.
Or maybe Joe West & the Sinners put it best in their classic trucker tune “$2000 Navajo Rug”:
I’ve got my own chakra machine, darlin’,
made out of oil and steel.
And it gives me good karma,
when I’m there behind the wheel
This presentation sifts through the carnage of history and offers an unvarnished look at some spectacular past machine learning failures to help predict what catastrophes may lay ahead, if we don’t step in. You’ve probably heard about a Tesla autopilot that killed a man…
Humans are great at failing. We fail all the time. Some might even say intelligence is so hard-won and infrequent, let’s dump as much data as possible into our “machines” and have them fail even faster on our behalf, at lower cost or to free us. What possibly could go wrong?
Looking at past examples, learning from failures, is meant to ensure we avoid their repetition. Yet it turns out when we focus our machines narrowly, and ignore safety decision controls or similar values, we simply repeat avoidable disasters instead of achieving faster innovations. They say hindsight is 20-20 but you have to wonder if even our best machines need corrective lenses. At the end of the presentation you may find yourself thinking how easily we could have saved a Tesla owner’s life.