Category Archives: Sailing

US Army Considers Grey Hats for PSYOP Warriors

Leaflets are so basic, so black beret, that it sounds like something higher up the hat-color chart may be coming to attract talent into Psychological Operations (PSYOP) as the field modernizes.

Nothing is decided yet, and there's still a chance someone could influence the decision, but rumor has it the psychological warfare troops will be represented by a beret in the color of white noise:

The idea is essentially still being floated at this point, but it could be a recruiting boon for the PSYOP career field, which is tasked with influencing the emotions and behaviors of people through products like leaflets, loudspeakers and, increasingly, social media.

“In a move to more closely link Army Special Operations Forces, the PSYOP Proponent at the U.S. Army John F. Kennedy Special Warfare Center and School is exploring the idea of a distinctive uniform item, like a grey beret, to those Soldiers who graduate the Psychological Operations Qualification Course,” Lt. Col. Loren Bymer, a USASOC spokesman, said in an emailed statement to Army Times.

The details still seem a little fuzzy, yet the reporters also dropped some useful knowledge bombs in their story:

1) The new Army Special Operations Command strategy, released just a month ago, states that everyone will always be trained in cyber warfare and weaponizing information:

LOE 2 Readiness, OBJ 2.2 Preparation: Reality in readiness will be achieved using cyber and information warfare in all aspects of training.

2) Weaponizing information means returning to the influence operations of World War II, let alone World War I…I mean adapting to the modern cloud platform (Cambridge Analytica) war:

“We need to move beyond our 20th century approach to messaging and start looking at influence as an integral aspect of modern irregular warfare,” Andrew Knaggs, the Pentagon’s deputy assistant secretary of defense for special operations and combating terrorism, said at a defense industry symposium in February. Army Special Operations Command appears to take seriously the role that influencing plays in great power competition.

Speaking of cloudy information and influence, an Army site describes how the Air Force in 2008 set up a data analysis function and referred to its members as Grey Berets, or Special Operations Weather Teams (SOWT):

As some of the most highly trained military personnel, the “grey beret” are a force to be reckoned with. Until SOWT gives the “all-clear” the mission doesn’t move forward.

The Air Force even offers hi-res photos of a grey beret as proof they are real.

Keesler AFB: “Team members collect atmospheric data, assist mission planning, generate accurate and mission-tailored target and route forecasts in support of global special operations, conduct special weather reconnaissance and train foreign national forces.”

Meanwhile over at the Navy and Marines there’s much discussion about vulnerability to broad-based information attacks across their entire supply chain.

This might be a good time to remember October 12, 1961 (only nine months after JFK took office as President), the day he visited Fort Bragg’s Special Warfare Center.

While Brigadier General (BG) William P. Yarborough, commander of the U.S. Army Special Warfare Center, waited at the pond, the presidential caravan drove down roads flanked on both sides by saluting SF soldiers, standing proudly in fatigues and wearing green berets.

“Late Thursday morning, 12 October 1961, BG Yarborough welcomed the 35th President, Secretary McNamara, GEN Decker, and the distinguished guests at the reviewing stand.”

General Yarborough very strategically wore the green beret as he greeted JFK, and they spoke of how Special Forces had wanted the beret for a long time (arguably since 1953, when ex-OSS Major Brucker started the idea).

A few days after the visit JFK famously wrote poetically to the General:

The challenge of this old but new form of operations is a real one…I am sure the Green Beret will be a mark of distinction in the trying times ahead.

Just one month later the green beret became the official headgear of the Special Forces.

Russian “Seabed Warfare” Ship Sails Near U.S. Cables

Recently I wrote about developments in airborne information warfare machines.

Also in the news lately is an infamous Russian “seabed warfare” ship that suddenly appeared in Caribbean waters.

Original artwork from Covert Shores, by H I Sutton.

She can deploy deep-diving submarines and has two different remote-operated vehicle (ROV) systems. And they can reach almost any undersea cable on the planet, even in deep water where conventional wisdom says that a cable should be safe.

In the same news story, the author speculates that the ship is engaged right now in undersea cable attacks.

…search patterns are different from when she is near Internet cables. So we can infer that she is doing something different, and using different systems.

So has she been searching for something on this trip? The journey from her base in the Arctic to the Caribbean is approximately 5,800 miles. With her cruising speed of 14.5 knots it should have taken her about two weeks. Instead it has taken her over a month. So it does appear likely.
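As a quick sanity check on that transit-time claim, here is a minimal back-of-the-envelope sketch (assuming the 5,800-mile figure is in nautical miles; if it were statute miles the estimate only gets shorter):

```python
# Back-of-the-envelope check of the Yantar transit-time claim.
# Assumption: the quoted 5,800 miles are nautical miles; 1 knot = 1 nautical mile per hour.

distance_nm = 5_800          # Arctic base to the Caribbean, per the article
cruising_speed_kn = 14.5     # knots

hours = distance_nm / cruising_speed_kn
days = hours / 24

print(f"{hours:.0f} hours = {days:.1f} days = {days / 7:.1f} weeks")
# Roughly 400 hours, about 16.7 days -- in the ballpark of the quoted
# "about two weeks," which makes a month-plus transit look unusually slow.
```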

The MarineTraffic map shows the ship near the coast of Trinidad.

MarineTraffic map of Yantar

Maps of the Caribbean waters illustrate the relevance of any ship’s position to Internet cables and seabed warfare.

TeleGeography Submarine Cable Map 2019

A Russian ship off the northwest coast of Trinidad means it’s either inspecting or even tapping into the new DeepBlue cable, listed as going online in 2020. Trinidad is in the lower right corner of the above map. Here’s a zoomed-in look at the area to compare with the ship position map above:

And the DeepBlue cable specs give a pretty good idea of why a Russian seabed warfare ship would be hovering about in those specific waters…

Spanning approximately 12,000 km and initially landing in 14 markets, the Deep Blue Cable will meet an urgent demand for advanced telecom services across the Caribbean. This resilient state-of-the-art cable has up to 8 fibre pairs with an initial capacity of 6Tbps and ultimate capacity of approximately 20Tbps per fibre pair. It is designed to be fully looped maximizing system resiliency. With more than 40 planned landings, Deep Blue Cable will bring 28 island nations closer to each other and better connected to the world.
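A rough reading of those specs (assuming the approximately 20 Tbps ultimate figure is per fibre pair across all 8 pairs, which the press-release wording leaves slightly ambiguous) puts the design ceiling of the system in this range:

```python
# Rough capacity estimate for the Deep Blue cable from the quoted specs.
# Assumption: "approximately 20Tbps per fibre pair" applies to each of the 8 pairs.

fibre_pairs = 8
ultimate_tbps_per_pair = 20
initial_tbps = 6  # quoted initial capacity (wording ambiguous: total vs per pair)

ultimate_total_tbps = fibre_pairs * ultimate_tbps_per_pair
print(f"Design ceiling: ~{ultimate_total_tbps} Tbps across {fibre_pairs} fibre pairs")
# ~160 Tbps at full build-out: a lot of Caribbean traffic concentrated in one
# system, which is exactly what makes it interesting to a seabed warfare ship.
```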

In only somewhat related news, the U.S. has been funding a scientific mission with the latest undersea discovery robots to find missing WWII submarines.

The USS Grayback was discovered more than 1,400 feet under water about 50 miles south of Okinawa, Japan, in June by Tim Taylor and his “Lost 52 Project” team, which announced the finding Sunday.

Their announcements are public and thus show how clearly technology today can map the seabed.

Announcing the discovery of the USS Grayback on June 5th, 2019 by Tim Taylor and his “Lost 52 Project” team.

Don’t Be an AppleCard: Exposed for Using Sexist Algorithm

Wrecked ship Captain de Kam said, “It’s just like losing a beautiful woman.”
Photograph: Michael Prior

The creator of Ruby on Rails tweeted angrily at Apple on November 7th that they were discriminating unfairly against his wife, and he wasn’t able to get a response:

By the next day he had a response, and he was even more unhappy. “THE ALGORITHM”, described in terms reminiscent of Kafka’s 1915 novel “The Trial“, became the focus of his complaint:

She spoke to two Apple reps. Both very nice, courteous people representing an utterly broken and reprehensible system. The first person was like “I don’t know why, but I swear we’re not discriminating, IT’S JUST THE ALGORITHM”. I shit you not. “IT’S JUST THE ALGORITHM!”. […] So nobody understands THE ALGORITHM. Nobody has the power to examine or check THE ALGORITHM. Yet everyone we’ve talked to from both Apple and GS are SO SURE that THE ALGORITHM isn’t biased and discriminating in any way. That’s some grade-A management of cognitive dissonance.

And the following day he appealed to regulators for a transparency regulation:

It should be the law that credit assessments produce an accessible dossier detailing the inputs into the algorithm, provide a fair chance to correct faulty inputs, and explain plainly why differences apply. We need transparency and fairness. What do you think @ewarren?

Transparency is a reasonable request. Another reasonable request in the thread was evidence of diversity within the team that developed the AppleCard product. These solutions are neither hard nor hidden.
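To make the transparency request concrete, here is a minimal sketch (entirely hypothetical field names and thresholds, not Apple’s or Goldman Sachs’ system) of what a credit decision that ships with its own “dossier” of inputs and reason codes could look like:

```python
# Hypothetical sketch of a credit decision that explains itself.
# None of these fields or thresholds come from Apple/Goldman Sachs; they only
# illustrate the kind of "accessible dossier" the tweet asks regulators to require.
from dataclasses import dataclass, field


@dataclass
class CreditDecision:
    approved_limit: float
    inputs_used: dict                                  # every input the model saw
    reason_codes: list = field(default_factory=list)   # plain-language reasons


def assess(applicant: dict) -> CreditDecision:
    reasons = []
    limit = 10_000.0
    if applicant["income"] > 100_000:
        limit *= 2
        reasons.append("Income above 100k doubled the base limit.")
    if applicant["utilization"] > 0.5:
        limit *= 0.5
        reasons.append("Credit utilization above 50% halved the limit.")
    return CreditDecision(limit, dict(applicant), reasons)


decision = assess({"income": 120_000, "utilization": 0.6})
print(decision.approved_limit)   # 10000.0
for r in decision.reason_codes:  # the "dossier": inputs plus plain reasons
    print("-", r)
```

The point is not the toy scoring logic; it is that the inputs and reasons travel with the decision, so a customer (or regulator) can contest faulty inputs instead of being told “IT’S JUST THE ALGORITHM.”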

What algorithms are doing, time and again, is accelerating and spreading historic wrongs. The question is fast becoming whether centuries of social debt, in the form of discrimination against women and minorities, is something technology companies are prepared for when “THE ALGORITHM” exposes the political science of inequality and links it to them.

Woz, co-founder of Apple, correctly states that only the government can correct these imbalances. Companies are too powerful for any individual to keep the market functioning to any degree of fairness.

Take, for example, the German government’s just-released “Datenethikkommission” report on regulating AI.

And the woman named in the original tweet also correctly states that her privileged status, achieving a correction for her own account, is no guarantee of a social system of fairness for anyone else.

I care about justice for all. It’s why, when the AppleCard manager told me she was aware of David’s tweets and that my credit limit would be raised to meet his, without any real explanation, I felt the weight and guilt of my ridiculous privilege. So many women (and men) have responded to David’s twitter thread with their own stories of credit injustices. This is not merely a story about sexism and credit algorithm blackboxes, but about how rich people nearly always get their way. Justice for another rich white woman is not justice at all.

Again these are not revolutionary concepts. We’re seeing the impact from a disconnect between history, social science of resource management, and the application of technology. Fixing technology means applying social science theory in the context of history. Transparency and diversity work only when applied in that manner.

In my recent presentation to auditors at the annual ISACA-SF conference, I concluded with a list and several examples of how AI auditing will perform most effectively.

One of the problems we’re going to run into with auditing Apple products for transparency is that Apple has long been waging a war against any transparency in technology, from denying our right to repair hardware to forcing “store”-bought software.

Apple’s subtle, anti-competitive practices don’t look terrible in isolation, but together they form a clear strategy.

The closed-minded Apple model of business is also dangerous as it directly inspires others to repeat the mistakes.

Honeywell, for example, now speaks of “taking over your building’s brains” by emulating how Apple shuts down freedom:

A good analogy I give to our customers is, what we used to do [with industrial technology] was like a Nokia phone. It was a phone. Supposed to talk. Or you can do text. That’s all our systems are. They’re supposed to do energy management. They do it. They’re supposed to protect against fire. They do it. Right? Now our systems are more like Apple. It’s a platform. You can load any app. It works. But you can also talk, and you can also text. But you can also listen to the music. Possibilities emerge based upon what you want.

That closing concept of possibilities can be a very dangerous prospect if “what you want” comes from a privileged position of power with no accountability. In other words, do you want to live in a building run by a criminal brain?

When an African American showed up to rent an apartment owned by a young real-estate scion named Donald Trump and his family, the building superintendent did what he claimed he’d been told to do. He allegedly attached a separate sheet of paper to the application, marked with the letter “C.” “C” for “Colored.” According to the Department of Justice, that was the crude code that ensured the rental would be denied.

Somehow THE ALGORITHM in that case ended up in the White House. And let us not forget that building was given such a peculiar name by Americans trying to appease white supremacists and stop blacks from entering even as guests of the President.

…Mississippi senator suggesting that after the dinner [allowing a black man to attend] the Executive Mansion was “so saturated with the odour of the nigger that the rats have taken refuge in the stable”. […] Roosevelt’s staff went into damage control, first denying the dinner had taken place and later pretending it was actually a quick bite over lunch, at which no women were in attendance.

A recent commentary about fixing closed minds, closed markets, and bias within the technology industry perhaps explained it best:

The burden to fix this is upon white people in the tech industry. It is incumbent on the white women in the “women in tech” movement to course correct, because people who occupy less than 1% of executive positions cannot be expected to change the direction of the ship. The white women involved need to recognize when their narrative is the dominant voice and dismantle it. It is incumbent on white women to recognize when they have a seat at the table (even if they are the only woman at the table) and use it to make change. And we need to stop praising one another—and of course, white men—for taking small steps towards a journey of “wokeness” and instead push one another to do more.

Those sailing the ship need to course-correct it. We shouldn’t expect people outside the cockpit to drive necessary changes. The exception is when we’re talking about the governance group that licenses ship captains and thus holds them accountable for acting like an AppleCard.

Africa Foreshadowed U.S. Abandonment of Allies in Syria: Opening Doors for Russian and Chinese Military Expansions

During Southern Accord 2012, U.S. Army Africa and other U.S. military forces foster security cooperation while conducting combined, joint humanitarian assistance, peacekeeping operations and aeromedical evacuation exercises. (U.S. Army Africa photo by Sgt. Adam Fischman)

The latest analysis of the Syria crisis increasingly reveals it is a Russian plan that the White House has swallowed hook, line and sinker. Both Russia and China stand poised to move into areas formerly allied with America and expand their own operations, eroding American relations and influence.

Unilateral withdrawal clearly harms U.S. interests in both the short term (the UN Security Council now compares the situation to Bosnia, with regional destabilization) and the long term (a high bar to regain a foothold or respect for re-entry), yet America somehow allows Executive-branch folly to proceed.

Perhaps you recall just a few months ago a similar withdrawal story was brewing in Africa? That probably should have been reported as a much starker warning of what was to come.

Gen Waldhauser said the troops will be deployed to missions that the US sees as high-priority.

“We all realise, you know, Africa, with regards to the prioritisation of our national interests … there’s no doubt about the fact that that it’s, you know, it’s not number one on the list,” Gen Waldhauser was quoted as saying.

The Trump administration views preparation for potential conflicts with China or Russia to be of higher priority than combating terrorism in Africa.

Now, with the White House flying a white flag by abandoning its Kurdish allies in Syria and inviting Russia to roll right in afterwards, there might be a clearer explanation for the abandonment of African partners.

The Kremlin’s goal is to emulate China’s success in fostering economic, diplomatic, and military links with Africa. To become an important partner, Moscow is organizing the first-ever Russia-Africa summit on 23-24 October.

The American pull-out from Africa accomplishes the opposite of preparing elsewhere for potential conflicts with China or Russia.

Consider how turning tail and intentionally opening doors to expanded Russian military sales is manifested in a brand-new announcement that Russia is now abruptly pushing into new African allegiances:

While Moscow is focused primarily on other regions, it regards Africa as an attractive venue to evade international sanctions imposed by Western nations and deepen ties with old and new partners while scoring points at the expense of the United States.

Part of Russia’s engagement in Africa is military in nature. The Russian military and Russian private military contractors linked to the Kremlin have expanded their global military footprint in Africa, seeking basing rights in a half dozen countries and inking military cooperation agreements with 27 African governments

America claiming to be redirecting its military towards confrontation with Russia is double-talk. It’s pulling its hands off the wheel, literally opening the door and handing keys to arms dealers to drive. This will mean a spread of anti-humanitarian influences and locking the U.S. out of “forward” stations for military and civilian operations, which will greatly increase risk of harm to the United States (along with any democratic nations and states).

What is especially baffling is how China and Russia are doing basically the same expansionist plan, threatening American influence and ability to protect values, yet get such different treatment by the White House.

Replace the word China with Russia in this next story and you should see the problem with the U.S. unilateral withdrawal from Syria as well as Africa:

“There are two concerns about these investments,” said Ohio Rep. Bob Gibbs, the top Republican on the Subcommittee for Coast Guard and Maritime Transportation. “First, the dual commercial and military uses of these assets; second, that the debt incurred by these countries will tie them to China in ways that will facilitate China’s international pursuits and potentially inhibit U.S. overseas operations.”

We’ve seen this already as China uses its offer of loans to later squeeze control of ports:

The Kenyan government risks losing the lucrative Mombasa port to China should the country fail to repay huge loans advanced by Chinese lenders. In November, African Stand reported on how Kenya is at high risk of losing strategic assets over huge Chinese debt, and just after a few months the Chinese are about to take action.

The bottom line is that pulling back to confront Russia and China is counterproductive. Advance deployments and influence are what were designed to prevent a lopsided confrontation, by forming global alliances that maintain what Eisenhower wisely referred to as the American need for a confederation of mutual trust and respect.

Losing alliances also means American warfare technology (which depends increasingly on intelligence) becomes less reliable in the very near future. Perhaps I’m stating the obvious, but things like “Simple map displays require 96 hours to synchronize a brigade or division targeting cycle…” will see performance gains faster through augmenting human alliance networks in the field than through pulling out and relying on AI alone.


Update October 24: LSE’s Stephen Paduano and alum John McDermott write in The Economist that the rise of Russian activity in Africa has been accompanied by senseless violence.

When three Russian journalists tried to investigate their country’s shady operations in the Central African Republic, they turned up dead in July 2018.

When Can You Trust Cloud Providers?

The Raft of the Medusa by Géricault depicts service provider incompetence of 1816: “Crazed, parched and starved, they slaughtered mutineers, ate their dead companions and killed the weakest”

Our first book detailed the infrastructure risks in cloud environments. It gave basic instructions for how to make it safe to build a cloud.

However, I realized right away that a second book would be necessary as I saw operations going awry. People offering data “services” in cloud environments were doing so unethically.

That’s why since 2013 I’ve been working on tangible, actionable solutions to problems in cloud environments like the impostor CISO, the immoral SRE, and the greedy CEO.

It has been a much harder book to write because The Realities of Securing Big Data crosses many functional lines in an organization, from legal to engineering, sales to operations. A long time coming now, it hopefully will clarify how and why things like this keep happening, as well as what exactly we can do about it:

We recently found that some email addresses and phone numbers provided for account security may have been used unintentionally for advertising purposes. This is no longer happening and we wanted to give you more clarity around the situation: https://help.twitter.com/en/information-and-ads

…and that led to everyone asking an obvious question.

You may remember a very similar incident last year and wonder why nobody at Twitter thought to test their systems to make sure they didn’t have the same security flaws as a safety laggard like Facebook.

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all.

Facebook and Twitter, after flashy high-profile CISO hires and lots of PR about privacy, have both sunk to terrible reputations. They rank toward the same levels as Wells Fargo in terms of customer confidence.

Facebook has experienced a tumultuous time due to privacy concerns and issues regarding election interference, ranked 94th. Wells Fargo ranked 96th. The Trump Organization ranked 98th, considered a “very poor” reputation.

The Drum says even the advertising industry is calling out Twitter for immorality and incompetence:

Neville Doyle, chief strategy officer at Town Square, suggested it was “enormously improbable” that Twitter ‘inadvertently’ improved its ad product with the sensitive data, and blasted the tech giant for being either “either immoral or incompetent”. Either way, he said, it was playing “fast and loose with users’ privacy”. Respected ad-tech and cybersecurity expert Dr Augustine Fou, who was previously chief digital officer at media agency Omnicom’s healthcare division, also branded Twitter’s announcement as “total chickenshit”. Last July, the Federal Trade Commission (FTC) fined Facebook $5bn for improperly handling user data, the largest fine ever imposed on a company for violating consumers’ privacy.

The technology fixes ahead are more straightforward than you might imagine, as are the management fixes.

In brief, you can trust a cloud provider when you can verify in detail that a specific set of data boundaries and controls are in place, with transparency around staffing authorizations and experience related to delivering services. Over the years I’ve led many engineering teams to build exactly this, so I’m speaking from experience of what’s possible. I’ve stood in customer executive meetings, including with executives at the highest levels, to detail how controls work and why the system was designed to mitigate cloud insider threats.
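As a minimal illustration (hypothetical control names, not any particular provider’s attestation format), the verification I mean is closer to a checklist you can run against evidence than a marketing claim you take on faith:

```python
# Hypothetical sketch: verify a provider's claimed controls against what you require.
# The control names and attestation format are illustrative only.

REQUIRED_CONTROLS = {
    "data_boundary_defined",      # where your data can and cannot flow
    "encryption_at_rest",
    "customer_managed_keys",
    "staff_access_logged",        # transparency around staffing authorizations
    "insider_threat_reviews",
}

def gaps(provider_attestation: dict) -> set:
    """Return required controls the provider has not evidenced as in place."""
    return {c for c in REQUIRED_CONTROLS if not provider_attestation.get(c, False)}

attestation = {
    "data_boundary_defined": True,
    "encryption_at_rest": True,
    "customer_managed_keys": False,
    "staff_access_logged": True,
}

missing = gaps(attestation)
print("Trust gaps:", missing or "none")
# -> Trust gaps: {'customer_managed_keys', 'insider_threat_reviews'}
```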

You should be especially concerned if management lacks an open and public resume of prior steps taken over years to serve the privacy needs of others, let alone management that lacks the ability to deconstruct how their control architecture was built from the start to serve your best interests.

What has been hard, especially through the years of Amazon’s “predator bully” subscription model being worshiped by sales teams, is keeping safety oriented around helping others. Tech cultures in America tend to cultivate “leaders” who think of innovation as separation, having no way to relate to the people they are serving.

The tone now seems to be changing as disclosures are increasing and we’re seeing exposure of the wrong things done by people who wanted to serve others while being unable to relate to them. Hoarding other people’s assets for self-gain in a thinly-veiled spin to be their “service provider” should never have been the meaning of cloud.

Russian Military Downplays Defeat by Female Walrus

The Russian Geographical Society used one of its modern landing craft in a way a mother walrus didn’t appreciate, so most news outlets are describing how she attacked their boat, sinking it and sending the Russian military running for their lives.

Naturally the Russian military made a statement that reported the opposite:

“Serious troubles were avoided thanks to the clear and well co-ordinated actions of the Northern Fleet servicemen, who were able to take the boat away from the animals without harming them.”

Definitely avoided any serious troubles there. “Able to take the boat away” from threats is double-speak for sinking. Aha, you can’t attack Russian boat, because there is no boat. Troubles avoided! Swim faster comrades, it is very cold.

Maybe something was lost in translation when Russians said they thought they were up to the tusk (pun intended).

A first-person account in Russian media said their boat was done in and a video shows them trying to poke the walrus with a gaff, which probably just made her more angry.

“The walrus was not injured. We just shoved her off. Our boat was damaged – sections three and five. Barely made it to shore,” said Leonid. (Морж не пострадал. Мы его просто отпихнули. А лодка пробита — три секции и пяти. Еле доплыли до берега, сообщает Леонид.)

Speaking of being lost…

The area in question supposedly is on Wilczek Island (Остров Вильчека), in the southeastern end of Franz Josef Land, Arkhangelsk Oblast, Russia. Maybe it’s somewhere else?

I have yet to find a western map anywhere listing a “Cape Geller” (мысе Геллера). Who was Geller?

US Senator Argues for Jailing Facebook Execs

This title comes from a recent interview with Oregon’s Senator Wyden:

Mark Zuckerberg has repeatedly lied to the American people about privacy. I think he ought to be held personally accountable, which is everything from financial fines to—and let me underline this—the possibility of a prison term. Because he hurt a lot of people. And, by the way, there is a precedent for this: In financial services, if the CEO and the executives lie about the financials, they can be held personally accountable.

Often in 2018 I made similar suggestions, based on the thought that our security industry would mature faster if a CSO could be held personally liable, like a CEO or CFO (e.g. under post-Enron SOX requirements).

And at Blackhat this year I met with Facebook security staff who said that during the 2016-2017 timeframe the team internally knew the severity of the election interference and were shocked when their CSO failed to disclose this to the public.

Maybe the Senator putting it all on the CEO today makes some sense strategically…yet it also raises the question of whether an “officer” of security was taking payments large enough to afford a $3m house in the hills of Silicon Valley while intentionally withholding data on major security breaches during his watch.

Given the appointment of a dedicated officer in charge of security, are we meant to believe he was taking a big salary only to follow orders and bear no personal responsibility? Don’t forget he drew press headlines (without qualification) as an “influential” executive joining Facebook, while at the same time leaving Yahoo because he said he wasn’t influential.

To be fair, he posted a statement explaining his decision at the time, and it did say that safety is the industry’s responsibility, or his company’s, not his. Should that have been an early warning that he wasn’t planning to own anything that went awry?

I am very happy to announce that I will be joining Facebook as their Chief Security Officer next Monday…it is the responsibility of our industry to build the safest, most trustworthy products possible. This is why I am joining Facebook. There is no company in the world that is better positioned to tackle the challenges…

There also is a weird timing issue: the start of the Russian campaign coincides with Facebook bringing on the new CSO. Maybe there’s nothing to this timing, just coincidence, or maybe the Russians knew they were looking at an inexperienced leader. Or maybe they even saw him as “coin-operated” (a term allegedly applied to him by US Intelligence), meaning they knew how easily he would stand down or look away:

  1. June 2015: Alex Stamos abruptly exits his first ever CSO role after failing to deliver on year-old promises of end-to-end encryption, and also failing to disclose breaches**, to join Facebook as CSO. Journalists later report this as “…beginning in June 2015, Russians had paid Facebook $100,000 to run roughly 3,000 divisive ads to show the American electorate”
  2. October 2016: Zuckerberg tries to shame outside critics/investigators and claim no internal knowledge… “To think it influenced the election in any way is a pretty crazy idea”
  3. January 2017: US Intelligence report conclusively states Russia interfered in 2016 election
  4. July 2017: Facebook officially states “we have seen no evidence that Russian actors bought ads on Facebook”
  5. September 2017: Facebook backtracks and admits it knew (without revealing exactly how soon) Russian actors bought ads on Facebook
  6. September 2017: Zuckerberg muddies their admission by saying “…investigating this for many months, and for a while we had found no evidence of fake accounts linked to Russia running ads”, which focuses on knowledge of fake accounts being used, rather than the more important knowledge Russia was running ad campaigns
  7. September 2017: Zuckerberg tries to apologize in a series of PR moves like saying “crazy was dismissive and I regret it” and asking for forgiveness
  8. October 2017: Facebook’s Policy VP issues a “we take responsibility” statement
  9. October 2017: Facebook admits 80,000 posts from 2015 (i.e. from when Stamos started as CSO) all the way to 2017 (i.e. when Stamos was still CSO) reached over 120 million people. Stamos brands himself both as the influential officer in charge of uncovering harms yet also a wall flower paid an officer salary to not speak out. It does somehow come back to the point that the Russian Internet Research Agency allegedly began operations only after Stamos’ joined. Even if it started before, though, he definitely did not disclose what he knew when he knew it. His behavior echoes a failure to disclose massive breaches while he was attempting his first CSO role in Yahoo! (see step 1 above)

Given the security failures from 2015 to 2017, we have to seriously consider the implications of a sentence describing Stamos’ priors, which somehow are what led him into being a Facebook CSO:

At the age of 36, Stamos was the chief technology officer for security firm Artemis before being appointed as Yahoo’s cybersecurity chief in March 2014. In the month of February, Stamos in particular clashed with NSA Director Mike Rogers over decrypting communications, asking whether “backdoors” should be offered to China and Russia if the US had such access.

There are a couple of problems with this paragraph, easily seen in hindsight.

First, Artemis wasn’t a security firm in any real sense. It was an “internal startup at NCC Group” and a concept that had no real product and no real customers. As CTO he hired outside contractors to write software that never launched. This doesn’t count as proof of either leadership or technical success, and certainly doesn’t qualify anyone to be an operations leader like CSO of a public company.

Second, nobody in their right mind in technology leadership, let alone security, would ask whether China and Russia are morally equivalent to the United States government when discussing access requests. That signals a very weak grasp of ethics and morality, as well as international relations. I’ve spoken about this many times.

If the U.S. has access, that in no way implies other governments are somehow morally granted the same access. Moreover, this was very publicly discussed in 2007, when Yahoo’s CEO was told not to give the Chinese the access they requested (Stamos was 28 at the time):

An unusually dramatic congressional hearing on Yahoo Inc.’s role in the imprisonment of at least two dissidents in China exposed the company to withering criticism and underscored the risks for Western companies seeking to expand there. “While technologically and financially you are giants, morally you are pygmies,” Rep. Tom Lantos (D., Calif.)

If anything these two points probably should have disqualified him from becoming CSO of Facebook, and that’s before we get into his one-year attempt to be CSO at Yahoo! that quickly ended in disaster.

In 2014, Stamos took on the role of chief information security officer at Yahoo, a company with a history of major security blunders. More than one billion Yahoo user accounts were compromised by hackers in 2013, though it took years for Yahoo to publicly report…Some of his biggest fights had to do with disagreements with CEO Marissa Mayer, who refused to provide the funding Stamos needed to create what he considered proper security…

Let me translate. Stamos joined and didn’t do the job of disclosing breaches because he was campaigning for more money. He was spending millions (over $2m went into prizes paid to security researchers who reported bugs). While his big-spend, bounty-centric program was popular among researchers, it didn’t build trust among customers. This parallels his work as CTO, which didn’t build any customer trust at all.

The kind of statements Stamos made about Artemis launching in the future (it never happened) should have been a warning. Clearly he thought taking over a “dot secure” domain name and then renting space to every dot com in the world was a lucrative business model (it wasn’t).

I’m obviously not making this up as you can hear him describe rent-seeking with a straight face. His business model was to use a private commercial entity to collect payments from anyone on the Internet in exchange for a safety flag to hang on a storefront, in a way that didn’t seem to have any fairness authority or logical dispute mechanism.

Here is a reporter trying to put the scheming in the most charitable terms:

In late 2010, iSEC was acquired by the British security firm, NCC Group, but otherwise the group continued operating much as before. Then, in 2012, Stamos launched an ambitious internal startup within NCC called Artemis Internet. He wanted to create a sort of gated community within the internet with heightened security standards. He hoped to win permission to use “.secure” as a domain name and then require that everyone using it meet demanding security standards. The advantage for participants would be that their customers would be assured that their company was what it claimed to be—not a spoof site, for instance—and that it would protect their data as well as possible. The project fizzled, though. Artemis was outbid for the .secure domain and, worse, there was little commercial enthusiasm for the project. “People weren’t that interested,” observes Luta Security’s Moussouris, “in paying extra for a domain name registrar who could take them off the internet if they failed a compliance test.”

Imagine SecurityScorecard owning the right to your domain name and disabling you until you pay them to clean up the score they gave you. Dare I mention that a scorecard compliance engine is full of false positives and becomes a quality burden that falls on the companies being scanned? Again, this was his only ever attempt at being a CTO (before he magically branded himself a CSO) and it was an unsuccessful non-starter, a fizzle, a dud.

From that somehow he pivoted into a publicly traded company as an officer of security. Why? How? He abruptly quit Artemis by taking on a CSO role at Yahoo, demanding millions for concept projects more akin to a CTO than CSO. He even made promises upon taking the CSO role to build features that he never delivered. Although I suppose the greater worry still is that he did not disclose breaches.

It was after all of that that he wanted to be called CSO again, this time at Facebook. That is what Wyden should be investigating. I mean I’m fine with Wyden making a case for the CEO to be held accountable as a starting point, the same way we saw Jeff Skilling of Enron go to jail.

It makes me wonder aloud again, however, whether the CFO of Enron, Andrew Fastow, pleading guilty in 2004 to two counts of conspiracy to commit securities and wire fraud…is an important equivalent to a CSO of Facebook pleading guilty to a conspiracy to commit breach fraud.

Stamos says he deserves as much blame as anyone else for Facebook being slow to notice and stamp out Russian meddling in the 2016 presidential election

Ironically Stamos, failing to get anywhere with his three attempts at leadership (Artemis, Yahoo and Facebook), has now somehow reinvented himself (again with no prior experience) as an ethics expert. He has also found someone to fund his new project to the tune of millions, which at Blackhat some Facebook staff reported to me was his way to help Facebook avoid regulations by laundering their research as “academic”.

It will be interesting to see if Wyden has anything to say about a CSO being accountable in the same ways a CFO would be, or if focus stays on the CEO.

In any case, after a year of being CSO at Yahoo and three years of being CSO at Facebook, Stamos’ total career amassed only four years as a head of security.

Those four years unmistakably will be remembered for one person who sat on some of the biggest security operations lapses in history. And his 2015 boast that he was taking an officer role because “no company in the world is better positioned” to handle the challenges of safety continues to produce this legacy instead:

Another month, another Facebook data breach.

Or to put it another way, here is how outside investigators described the Facebook CSO legacy:

Paul-Olivier Dehaye, a data protection specialist, who spearheaded the investigative efforts into the tech giant, said: “Facebook has denied and denied and denied this. It has misled MPs and congressional investigators and it’s failed in its duties to respect the law.

“It has a legal obligation to inform regulators and individuals about this data breach, and it hasn’t. It’s failed time and time again to be open and transparent.”


** The Class-action lawsuit against Yahoo security practices under Stamos provides the following timeline:

2014 Data Breach: In November 2014, malicious actors were able to gain access to Yahoo’s user database and take records of approximately 500 million user accounts worldwide. The records taken included the names, email addresses, telephone numbers, birth dates, passwords, and security questions and answers of Yahoo account holders, and, as a result, the actors may have also gained access to the contents of breached Yahoo accounts, and thus, any private information contained within users’ emails, calendars, and contacts.

2015 and 2016 Data Breach: From 2015 to September 2016, malicious actors were able to use cookies instead of a password to gain access into approximately 32 million Yahoo email accounts.


Update September 7th, 2019:

In another meeting with ex-Facebook staff, I was told that when the “CEO and CSO are nice people” that should mean they don’t go to jail for crimes, because nice people shouldn’t go to jail.

This perspective has me wondering what the same people would say if I told them Epstein had a lot of friends who said he was nice. I mean, their “nice” get-out-of-jail-free card suggests to me some kind of context change might help.

I will raise the issue in my CS ethics lectures first using an example outside the tech industry: should the captain of a sunken ship face criminal investigation for saving himself as 34 passengers died in an early-morning fire? Then I will ask about the behavior of the CSO on deck during the Yahoo and Facebook breaches.

A Sailor-Historian-Technologist Perspective on the Boeing 737 MAX Disaster

The tragedy of Boeing’s 737 product security decisions creates a sad trifecta for someone interested in aeronautics, lessons from the past, and risk management.

First, there was a sailor’s warning.

We know Boeing moved a jet engine into a position that fundamentally changed handling. This was a result of Airbus’ ability to add a more efficient engine to their popular A320. The A320 has more ground clearance, so a larger engine didn’t change anything in terms of handling. The 737 sits lower to the ground, so changing to a more efficient engine suddenly became a huge design change.

Here’s how it unfolded. In 2011 Boeing saw a new Airbus design as a direct threat to profitability. A sales-driven rush meant efficiency became a critical feature for their aging 737 design. The Boeing perspective on the kind of race they were in was basically this:

Boeing had to solve for a plane much closer to the ground, while achieving the same marketing feat as Airbus, which claimed the added efficiency didn’t change a thing (thus no costly pilot re-training). This is where Boeing made the critical decision to push their engine design forward and up on the wing…while claiming that pilots did not need to know anything new about handling characteristics.

60 Minutes Australia illustrated the difference in their segment called “Rogue Boeing 737 Max planes ‘with minds of their own’” (look carefully on the left and it says TOO BIG next to the engine):


Don’t ask me why an Australian TV show didn’t call their segment “Mad Max”.

And that is basically why handling the plane was different, despite Boeing’s claims that their changes weren’t significant, let alone safety-related. The difference in handling was so severe (risk of stall) that Boeing then doubled down with a clumsy software hack to the flight control systems to secretly compensate for the handling changes (as well as selling airlines an expensive sensor “disagree” light for pilots, which the downed planes hadn’t purchased).
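To see why a single-sensor software patch is such a fragile answer to an airframe problem, here is a deliberately oversimplified sketch (not Boeing’s actual MCAS logic; the thresholds and names are invented) of automated nose-down trim driven by one angle-of-attack reading, and how one bad sensor turns it into a system that fights the pilot:

```python
# Oversimplified sketch of single-sensor automated trim, in the spirit of the
# failure mode described above. Not Boeing's implementation; all values invented.

AOA_LIMIT_DEG = 15.0        # pretend stall-risk threshold
TRIM_STEP = -0.5            # nose-down trim applied per cycle

def trim_command(aoa_sensor_deg: float) -> float:
    """Return nose-down trim if the single AoA reading looks too high."""
    return TRIM_STEP if aoa_sensor_deg > AOA_LIMIT_DEG else 0.0

def safer_trim_command(aoa_left: float, aoa_right: float, max_disagree: float = 5.0) -> float:
    """Cross-check two sensors; do nothing (and flag) if they disagree."""
    if abs(aoa_left - aoa_right) > max_disagree:
        print("AOA DISAGREE -- automation inhibited, alert the crew")
        return 0.0
    return trim_command((aoa_left + aoa_right) / 2)

# A stuck sensor reading 40 degrees while the real angle of attack is 5 degrees:
for cycle in range(3):
    print("single-sensor trim:", trim_command(40.0))        # keeps pushing the nose down
print("cross-checked trim:", safer_trim_command(40.0, 5.0))  # refuses, and raises the flag
```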

An odd twist to this story is that it was American Airlines who kicked off the Boeing panic about sales with a 2011 order for several hundred new A320s. See if you can pick out the more forward and higher engine design in this illustration handed out to passengers.

I added this to the story to note again how Boeing wanted to emphasize “identical” planes yet marketed them heavily as different, even in an in-flight magazine given to every passenger. It stands in contrast to how that same airline’s pilots were repeatedly told by Boeing that the two planes held no differences in flight worth highlighting in documentation.

To make an even finer point, the Airbus A320 in that same airline magazine doesn’t have a sub-model.

While this engine placement clearly had been approved by highly specialized engineering management thinking short-term (about racing through FAA compliance), who was thinking about serious long-term instability as a predictable cost?

The emerging safety problems led to a series of shortcut hacks and partial explanations that attempted to minimize talk about stabilizing or training for new flow characteristics, rather than admit huge long-term implications (deaths).

Boeing Knew About Safety-Alert Problem for a Year Before Telling FAA, Airlines

The Seattle Times posted clear evidence of pilots fighting against their own ship, unaware of the reasons it was fighting them.

Anyone who sails, let alone flies airplanes, immediately can see the problem in calling a 737 “Mad Max” the same as a prior 737 design, when flow handling has changed — one doesn’t just push a keel or mast around without direct tiller effects.

Some pilots say unofficially they knew the 737 “Mad Max” was not the same and, at least in America, were mentally preparing themselves for how to react to a defective system. Officially, however, pilots globally needed to be warned clearly and properly, as well as trained better on the faulty software that would fight them for safe control of the aircraft.

Second, America has a “Widowmaker” precedent.

Years ago I wrote about pilot concerns with a plane of WWII, the crash-prone B-26.

The B-26 had a high rate of accidents in takeoff and landing until crews were trained better and the aspect ratio of its wings and rudder was modified.

That doesn’t tell the whole story, though. In terms of history repeating itself, evidence mounted that this American airplane was manifestly unsafe to fly and that the manufacturer wasn’t inclined to proactively fix it and save lives.

A biographer of Truman gives us some details from 1942 Senate hearings, foreshadowing the situation today with Boeing.

Apparently crashes of the Martin B-26 were happening at least every month and sometimes every other day. Yes, crashes were literally happening 15 days out of 30 and the plane wasn’t grounded.

The Martin company, in response to concerns, started a PR campaign gloating about how one of its aircraft actually didn’t kill everyone on board and had received blessings from Churchill.

Promoting survivorship should be recognized today as a dangerously and infamously bad data tactic. Focusing on the economics of Boeing is the right thing here. They haven’t yet stooped to Martin’s survivorship-bias campaign, but it does seem that Boeing knowingly was putting lives at risk to win a marketing and sales battle with a rival, similar to what Tesla could be accused of doing.

Third, there are broad societal issues from profitable data integrity flaws.

Can we speak openly yet about the executives making money on big data technology with known integrity flaws that kill customers?

There’s really a strange element to this story from a product management decision-flow perspective. Nobody should want to end up where we are today with this issue.

Boeing knew right away its design change impacted the handling of the product. They then added fixes, without notifying the customers responsible for operating the product of the severity of a fix failure (a crash).

I believe this is where and why the expanding number of investigations are being cited as “criminal” in nature.

  • Investigation of development and certification of the Boeing 737 MAX by the FAA and Boeing, by DoJ Fraud Section, with help from the FBI and the DoT Inspector General
  • Administrative investigation by the DoT Inspector General
  • DoT Inspector General hearings
  • FAA review panel on “certification of the automated flight-control system on the Boeing 737 MAX aircraft, as well as its design and how pilots interact with it”
  • Congressional investigation of the “status of the Boeing 737 MAX” for the US House Transportation and Infrastructure Committee

These investigations all seem to be getting at the sort of accountability I’ve been saying needs to happen for Facebook, which also suffered from integrity flaws in its product design. Will a top executive eventually be named? And will there be a wider impact on engineering and manufacturing ethics in general? If the Grover Shoe Factory disaster is any indication, the answers should be yes.

In conclusion, if a change in design is being deceptively presented, and the suffering of those impacted is minimized (because profits, duh), then we’re approaching a transportation regulatory moment that really is about software engineering. What may emerge is that these software-based transportation risks, because of fatalities, will bring regulation for software in general.

Even if regulation isn’t coming, the other new reality is that buyers (airlines, especially outside the US and beyond the FAA) will do what Truman suggested in 1942: cancel contracts and buy from another supplier who can pass transparency and accountability tests.

Fruit Fly Movements Imitated by Giant Robot Brain Controlled by Humans

They say fruit flies like a banana, and new science may now be able to prove that theory because robot brains have figured out that to the vector go the spoils.

The Micro Air Vehicle Lab (MAVLab) has just published their latest research:

The manoeuvres performed by the robot closely resembled those observed in fruit flies. The robot was even able to demonstrate how fruit flies control the turn angle to maximize their escape performance. ’In contrast to animal experiments, we were in full control of what was happening in the robot’s ”brain”.

Can’t help but notice how the researchers emphasize getting away from threats with “high-agility escape manoeuvres” as a primary motivation for their work, which isn’t bananas. In my mind escape performance translates to better wind agility and therefore weather resilience.
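For a sense of what “controlling the turn angle to maximize escape performance” means in the simplest geometric terms, here is a toy sketch (my own illustration, far cruder than the MAVLab controller): pick the turn that points the flyer directly away from the approaching threat.

```python
# Toy geometric sketch of an escape turn: rotate until heading points directly
# away from the threat. This is my illustration, not the MAVLab control law.

def escape_turn_deg(heading_deg: float, threat_bearing_deg: float) -> float:
    """Signed turn (degrees) that points the flyer directly away from the threat.

    heading_deg        -- current flight heading
    threat_bearing_deg -- absolute bearing from flyer to the looming threat
    """
    desired_heading = (threat_bearing_deg + 180.0) % 360.0        # directly away
    turn = (desired_heading - heading_deg + 180.0) % 360.0 - 180.0
    return turn  # in (-180, 180]; the sign gives the turn direction

# Threat looms at an absolute bearing of 30 degrees while flying heading 0:
# the function returns about -150 degrees, i.e. a hard turn left and away.
print(escape_turn_deg(heading_deg=0.0, threat_bearing_deg=30.0))
```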

The research also mentions the importance of rapidly deflating costs in flying machines. No need to guess who would really need such an affordable threat-evading flying machine.

I mean times really have changed since the 1970s, when…

Developed by CIA’s Office of Research and Development in the 1970s, this micro Unmanned Aerial Vehicle (UAV) was the first flight of an insect-sized aerial vehicle (Insectothopter). It was an initiative to explore the concept of intelligence collection by miniaturized platforms.

The Insectothopter was plagued by an inability to fly in actual weather, as even the slightest breeze would render it useless. In terms of lessons learned, the same problems cropped up with Facebook’s (now cancelled) intelligence collection by elevated platform.

On June 28, 2016, at 0743 standard mountain time, the Facebook Aquila unmanned aircraft, N565AQ, experienced an in-flight structural failure on final approach near Yuma, Arizona. The aircraft was substantially damaged. There were no injuries and no ground damage. The flight was conducted under 14 Code of Federal Regulations Part 91 as a test flight; the aircraft did not hold an FAA certificate of airworthiness.

Instead of getting into the “airworthiness” of fruit flies, I will simply point out that “final approach” is where the winds blow and the damage occurred. If only Facebook had factored in some escape performance maximization to avoid the ground hitting them so dangerously when they landed.

Lessons in Secrets Management from a Navy SEAL

Good insights from these two paragraphs about the retired Rear Admiral Losey saga:

Speaking under oath inside the Naval Base San Diego courtroom, Little said that Losey was so scared of being recorded or followed that when the session wrapped up, the SEAL told the Navy investigator to leave first, so he couldn’t identify the car he drove or trace a path back to his home.

[…]

…he retaliated against subordinates during a crusade to find the person who turned him in for minor travel expense violations.