Egregious Misconduct Lawsuit For 2014 Yahoo Security Management

It was the 10th of March 2014, the bugles were blaring. A red carpet was unrolled.

Who was this man of mystery coming into view? He came with no prior CSO experience, let alone experience running a large operation. Suddenly, out of nowhere, front and center of Yahoo’s own financial news site was the answer:

Watch out, Google. The rumors are true. Yahoo has officially stepped up its security A-game. It’s called Alex Stamos.

Yahoo announced yesterday that it hired the world-renowned cybersecurity expert and vocal NSA critic to command its team of “Paranoids” in bulletproofing all of its platforms and products from threats that will surely come.

The headline-grabbing hire is widely being viewed as Yahoo’s attempt to restore its reputation for trustworthiness in the fallout of a recent rash of ad-related malware attacks that jeopardized millions of its users’ identifying data.

Watch out, Google? Vocal NSA critic? These seem like battles on two fronts. Is there anyone he doesn’t have a beef with?

Never have I seen a CSO hire celebrated as a threat to organizations he most certainly would have to work with, even where there are disagreements. Surely that starts the role off with headwinds that reduce his effectiveness. The whole tone of the PR piece seems ignorant of what a CSO is and does.

Let’s take a closer look here. The article, after bizarrely casting shade at Google’s security team, gave a quick run-down that revealed a lack of relevant experience:

Before coming aboard at Yahoo, Stamos served as chief technology officer of Artemis, a leading San Francisco-based Internet security firm that specializes in .secure Top-Level Domain security (TLD), over the last year and 10 months, according to his LinkedIn profile. Prior to his stint at Artemis, he co-founded iSEC Partners “with good friends.” Artemis’s parent company NCC Group acquired the pioneering security firm in late 2010.

Before launching iSEC Partners, Stamos held a two-year post as a managing security architect at @stake, Inc., a digital security company that helped corporations secure their critical infrastructure and applications. Symantec acquired @stake, Inc. in late 2004. Stamos also worked as a senior security engineer for nearly two years at LoudCloud, a software company now called Opsware that operates out of the same city Yahoo calls home base.

Leading what?

Artemis was a leader? I don’t see how anyone could claim that. Note that “according to his LinkedIn profile” is used as a qualifier because there wasn’t any other source for that kind of nonsense.

Ok, so think back to 2014. Here’s a guy at an unknown “company”, which never achieved a solid customer base or a real product, claiming as its CTO to be an industry leader. The leap into a CSO role at Yahoo, fighting BOTH the government (NSA) and non-government (Google)…sounded fishy then, and sounds fishy now. Who wanted him in so badly they’d ignore the lack of quals?

Let’s be clear: since Artemis never was independent of NCC, we’re talking about a guy with a CTO title on a small team that abruptly folded. That is not a stepping-stone into operations management at a large publicly traded company. And his record before doesn’t help either: some security research and consulting on three small teams, with no large organizational/management experience anywhere.

The lack of experience matters not only for the obvious reason of being qualified to do the work; it also raises the question of what is reasonable to expect in terms of conduct (and chances of misconduct). Risk management is the bedrock of a CSO role. In 2014 Yahoo was hiring someone who had never been in that hot seat at any point in his career. He lacked a track record of responding to incidents, events and breaches from an operations management position, and yet he was getting a lot of unusual fanfare from his new employer.

More to the point, he had been billing himself as the best person to lead Yahoo, and didn’t seem to mind being framed as someone who would make them bulletproof even against future threats. That’s crazy PR talk.

It was the “we’re going to build a wall and make Mexico pay for it” campaign of the infosec industry.

Fast forward to January 23, 2019 and look at the hundreds of millions of dollars in losses from security leadership failures:

According to the S.E.C., “In late 2014, Yahoo had learned of a massive breach of its user database that resulted in the theft, unauthorized access or acquisition of hundreds of millions of its user’s personal data.” The agency further alleged that “Yahoo senior management and relevant legal staff did not properly assess the scope, business impact or legal implications of the breach” and “did not share information regarding the breach with Yahoo’s auditors or outside counsel.”

Yahoo didn’t disclose the breach until September 2016, when it was negotiating the sale of its internet business to Verizon. Although the transaction was completed, the acquisition price was lowered by $350 million to $4.48 billion.

Guess what happened in between 2014 when this man started his first-ever CSO role and 2016? No, I’m not talking about the $2 million he handed out in “bounties” to his friends in the security research industry. They of course to this day are very happy about what he did for them.

No, I’m not talking about his pre-announcement of end-to-end encrypted email as a product feature he would bring to customers, before he even had a team hired to work on it, and his failure to deliver.

Following up on a promise it made during last summer’s Black Hat, Yahoo on Sunday said it’s on track to deliver end-to-end encryption for its email…”Just a few years ago, e2e encryption was not widely discussed, nor widely understood. Today, our users are much more conscious of the need to stay secure online,” Stamos wrote on Yahoo’s Tumblr.

That kind of awareness is what Heartland boasted about in 2009. Had Stamos been a CSO before, he might have known e2e was widely discussed.

In fact, a more sensible thing would have been for him to say people have talked for too long about e2e, basically everyone can understand it, now someone needs to deliver on their promise in order to earn trust. RedPhone and TextSecure (precursors to Signal) were launched in 2010, incidentally (no pun intended).

It almost seemed like he was trying, from the start of his role as an officer, to juice the stock price with giant pre-announcements, using user awareness as the goal while letting hazy engineering delivery requirements slide.

During Stamos’s career, Yahoo pledged to offer a fully encrypted email service and moved to add encryption to more of its websites. It’s unclear what his departure means for those efforts…

It means nothing he promised was delivered. The same thing happened with Artemis. He left both so abruptly that his direct reports weren’t aware; people working on his team allegedly found out the day of his departure.

Instead of making Yahoo bulletproof, as his PR would have had everyone believe on the way in, he ran for self-cover when shots rang out. I’d love to be wrong about this, and I beg you to prove me wrong, but I see facts showing Artemis and Yahoo followed the same pattern:

Unable to fly his plane, a pilot stands up, waves at the staff he’s leaving behind, and leaps with a golden parachute.

More than a few Paranoids told me they weren’t pleased to hear that such a neophyte CSO, after he fumbled management of privacy protections, was going to the notoriously anti-privacy company Facebook. His words:

I had a wonderful time at Yahoo and learned that the Yahoo Paranoids truly live up to their legend. Their commitment, brilliance, drive and pioneering spirit made it a pleasure to roll up our sleeves and get to work.

Here’s what that translated into almost immediately:

Facebook allowed Microsoft’s Bing search engine to see the names of virtually all Facebook users’ friends without consent, the records show, and gave Netflix and Spotify the ability to read Facebook users’ private messages.

The social network permitted Amazon to obtain users’ names and contact information through their friends, and it let Yahoo view streams of friends’ posts as recently as this summer, despite public statements that it had stopped that type of sharing years earlier.

Facebook has been reeling from a series of privacy scandals, set off by revelations in March that a political consulting firm, Cambridge Analytica, improperly used Facebook data to build tools that aided President Trump’s 2016 campaign. Acknowledging that it had breached users’ trust…

Integrity tools were seriously lacking, despite growing evidence of problems and alerts going off everywhere. Facebook’s CSO apparently was allowing huge breaches of customer privacy, and his boss was forced by external reporting to describe his role as a failure.

In 2017, ProPublica reporters (two of whom are now at The Markup) found that advertisers could target people who were interested in terms like “Jew hater,” and “History of ‘why jews ruin the world.'” In both cases, Facebook removed the ad categories in question. In response to the ProPublica findings, Facebook COO Sheryl Sandberg wrote in a 2017 post that the option to target customers based on the categories in question was “totally inappropriate and a fail on our part.” It appears Facebook has maintained a “pseudoscience” ad category group for several years.

What’s most interesting about such a massive disaster under the CSO is really the style and type of PR moves. He used them again as he picked up his new role, talking about how he’s the one who can make the data safe at his second CSO job. He hadn’t delivered measurable risk reduction in his first job, yet someone wanted him at Facebook and he had boastful press. Who again?

Did anyone think Yahoo really was playing its “security A-game” when the head of security couldn’t put together enough executive presence to get anyone, literally a single peer, to do the right thing? If a public company is breached at this level, and a CSO can’t move the dial on disclosure, in what sense are they actually performing the job?

This man who ran wordy PR campaigns about “bulletproofing all of its platforms and products from threats that will surely come” turned out to be all hat and no cattle hiding under the mountain of cash he was getting paid and paying others.

In that sense, shareholders and customers are right to be angry about the performance of management. Such high boasts and such failure to deliver should not be taken lightly by regulators.

Sure, if someone takes a CSO role and fumbles it there generally should be accountability. In this case, however, we’re talking about a person who regularly ran high-profile self-promotional events promising future safety above and beyond others. It even seemed harmful to the reputation of the prior CSO, the way PR was used to describe the transition as the arrival of a superior leader. He put himself in a much different situation than any other CSO we see in the industry.

The op-ed in the NYT puts it rather bluntly:

…director and officer liability for cybersecurity oversight is entering new and potentially perilous territory. That is especially so in cases like Yahoo’s, in which shareholders allege egregious misconduct at the highest levels of an organization.

Our security industry soon may be entering a post-Stamos era, where we could build guidelines to prevent someone of his boastful nature from taking a role they are unable to conduct. I’ve heard meetings have been underway for a while, and maybe even standards being drafted.

The 2014 CSO breach management at Yahoo alone was staggeringly awful, destroying trust in the brand. That was when disclosure should have happened. Instead he quietly left and went on to do it all again at his next job. That’s the real case, since there isn’t any related prior work to reference, just a repeat disaster performance.

I’ve seen very few people in security run press using such boastful praise heaped on themselves for an operations job that benefits most from a low profile. It makes little sense. And he did it twice, his only two attempts to be a CSO.

What really gets me, however, is running that kind of “I bring more influence than everyone else” PR story, then claiming the opposite after a breach: that he didn’t have the ability to influence anyone, as threats flowed through his celebrated fingers.


Update April 23, 2020: added additional integrity breach details under the CSO, including one that has been ongoing until now.

Do Walls Work?

Strangely enough I’ve been getting this question lately from people who believe I might have an answer. Little do they realize how complicated the answer really is.

The short answer is (from a political economy view) that walls will be said to work when someone is trying to get them funded, and will be said to not work when the same people (or those who follow their folly) try to get all the other things funded (because walls easily fail, as everyone familiar with security can predict).

Before I go much further, let me briefly turn to the philosophical question of walls. One of the most famous Muslim scholars in the world, Muhammad Ali, probably best exemplified the answer to any questions about walls “working”. Here’s a eulogy to his wisdom, worth a watch in its entirety. For purposes here I’ve started towards the end at the relevant quote:

…life is best when you build bridges between people, not walls.

So if the celebrated genius of a fighter Ali tells us life is best with bridges, why build walls at all? And if security experts (defenders and attackers) so easily predict failures, why spend money on them? These are the kinds of questions every CSO should be well-prepared to answer. It’s basically the “why should I fund your project to disable connections, when the point of business is to enable them” meeting.

This goes to the heart of the Anti-Virus (AV) industry, and the current derivatives (Clownstrike, Cylance…you know who I’m talking about).

In the early days of viruses (early 1980s) there were theories about positive security models, which measured system integrity in what today we would call “whitelists” (or allowlists). If you want to run something on a computer, you boil it down to its essence, the most efficient model and description, such that anything outside that ordinary baseline could be flagged as unusual or even adversarial.

Such a model of safety isn’t revolutionary by any stretch of the imagination. It was a simple case of people with some knowledge of the healthcare industry saying computer viruses could be detected by looking for what is abnormal, and emphasizing a thorough scientific understanding of what is normal.

Well, in healthcare there is motivation to spend on establishing such knowledge because “healthy” is a valuable state of being. In computers, however, there was a giant loophole preventing this kind of science being developed. Companies like McAfee realized right away that if you just scare people with fear, uncertainty and doubt about imminent invasion by caravans of viruses you can get them to throw money into a wall (even though it doesn’t work).

I would make the usual snake-oil reference here, except I have to first point out that snake-oil has real health benefits.

…snake oil in its original form really was effective, especially when used to treat arthritis and bursitis.

The concept of a snake-oil salesman refers to some shady American guy stealing Chinese ideas and using cheap counterfeits to profit on harm to customers. Thus the McAfee model of building walls (today we talk about “blacklists” being ineffective, when really we could say fake snake-oil) for huge amounts of money started around 1987. At that time McAfee the man himself created a company to collect money for delivering little more than a sense of safety, while attackers easily bypassed it.

Unfortunately consumers bought into this novelty wall sold by McAfee, despite it being mostly nonsense. The opportunity cost was massive and the security industry has taken decades to recover. Innovators trying to compete by achieving any kind of security “science” in operations were obviously far less profitable than the raft of snake-oil McAfee marketing executives.

Consider, for example, how in 1992 McAfee told the world an invasion was coming and it needed him to build some more walls.

McAfee was blamed for creating a false threat to sell more of his anti-virus elixir – which he did. McAfee’s anti-virus software sales reportedly “skyrocketed” that year, with more than half of the companies in the Fortune 100 having purchased McAfee software. Of course, this only furthered the theory that McAfee had just made up the whole damn thing.

He retired after this, scooping up millions in profit by building walls that didn’t work for a threat that didn’t exist.

To be fair, threats do exist, and walls do have a role to play. Hey, after all, we do use firewalls too, right? And firewalls have proven themselves useful in a most basic way, by making attackers shift to the application layer when all the other service ports are closed.

In other words, firewalls “work” in the same way a border wall could end up dramatically increasing threats coming through airports, seaports and even underground. Basically air, sea and land threats could increase and be detected less easily by building a wall. When I used to pentest utilities, for example, we rated walls as significantly less effective at stopping us than six-sided boxes (buildings, if you will).

True story: on a datacenter pentest I approached two layers of walls. The first was easily bypassed and then I used some engineering to get through the second one. It was only at that point I realized I was in the wrong location. Datacenters used to be careful to avoid having any outward logos or markings, even obfuscating their address. In this case it worked! After getting myself through two walls without much thought, I was looking right at an ICE logo and a bunch of guns.

Yes, I accidentally had tested the Immigration and Customs Enforcement (ICE) facility…and immediately began egress. Getting out quickly took some creativity, unlike getting in, and ended up being a better skills test. In the end it was fine, a laugh for everyone, including the datacenter (which I did test immediately after).

So in the strictest sense, walls have some work to do, and they may be capable of delivering. This is very different from saying walls work, however, when people are thinking in the broader sense of being safe from harm. A con-man like McAfee can vacuum up money to get rich while delivering almost no value, because “walls work” is a tiny grain of truth in his giant cake factory shipping nutrition-less lies about health (risk and safety).

1600s: Que ne mangent-ils de la croûte de pâté? (let them eat forcemeat crust!)

1700s: Qu’ils mangent de la brioche! (let them eat cake!)

1990s: Let me install AV!

2019: Let me build a wall!

Almost a decade ago I did a small speaking tour about cloud security on this topic, although I used the Maginot line as an example. This massive defensive wall was named after 1930s French Minister of War André Maginot, and constructed along the country’s border with Germany.

I pointed out that one could argue the Maginot line forced attackers to shift tactics and use other entry methods. In that sense those walls did some actual work, like a firewall or AV will do for you today.

However, the French expected the wall to prevent the very thing that happened (rapid invasion past their borders). In fact, the forces at the wall became so irrelevant that they still stood ready and willing to fight even after the French government capitulated to the Nazis. Let’s face it: had the French military leadership simply listened to all the active warnings about Nazis going around the line, France likely could have said the wall did a job of helping focus their active response (they could have directed defenses to neutral-country borders that had no walls).

The French leadership failed to notice something was not normal (enemy troops moving through the Ardennes Forest and violating neutral countries). And that is why Maginot’s expensive wall continues to be almost universally remembered as a huge failure. (Some do still argue, as I did too, that Maginot’s plans worked within an extremely narrow assessment).

A “Manstein Plan” directed Nazi tanks through the Ardennes Forest to exploit Maginot Line weakness in uncompleted areas (despite 1938 exercises by General André-Gaston Prételat predicting this exact issue by driving his tanks through that forest). Maybe the Manstein Plan should really just have been called the Prételat Report? Source: Martin Marix Evans, Invasion! Operation Sea Lion 1940 (Routledge; 1st edition, 9 Sep. 2004), page 37.

I don’t think any French person to this day would say their wall worked, however, given how it was billed to them at the time of funding (for an extremely high cost, which weakened more modern/important security needs like detection and radio/aero/rapid response).

For the French, the greatest failing of the Maginot Line arguably lay not in its conception, but in the opportunity costs that its construction imposed. The 87 miles of fortifications that were completed by 1935 cost some 7 billion francs ($8 billion in 2015 terms), over twice the initial estimate when the effort began in 1930. Depending on the source, the entire French defense budget in 1935 was between 7.5 (John Mearsheimer) and 12.8 billion (Williamson Murray) francs. As a result of this stupendous outlay, French military development in all other areas, from tanks to aircraft, suffered.

In other words, the current US regime is looking at data suggesting airports are the vulnerable path for entry and yet is proposing money be spent on something completely unrelated to airports. France in this scenario would be looking at data suggesting forests and neutral countries are the vulnerable paths for entry and blowing its budget on a wall elsewhere.

Terrorists trying to infiltrate the U.S. across our southern border was more of a theoretical vulnerability than an actual one…the figure she seems to be citing is based on 2017 data, not 2018, and refers to stops made by Department of Homeland Security across the globe, mainly at airports.

Does a wall on the border help with the real vulnerability in airports? No. The wall expense actually hurts, making the US materially less safe. One might conclude that shutting the government down, reducing active defenses at airports, to force a redirection of security funds to a useless wall is a very cynical plot that any hostile adversary would dream about.

To put a finer point on it, the expensive shutdown and the demand for an expensive wall both reflect the self-harming anti-American mindset of the current regime, and present grave dangers to US national security.

The long answer is thus that walls work at a very primitive level, which tends not to be worth the cost except in very particular cases where the predicted results are known and wanted. In the present context of the US border, there is no imminent threat and there is little to no chance of success without massive investment in detecting other methods of entry predicted (again, for a non-imminent threat).

There’s a reason AV is mostly free today. And it’s the same reason a wall on the US border has been pitched as an extremely expensive response to a fantasy threat, meaning it has little to no real value. Someone is trying to redistribute wealth and quit before people realize the walls are a distraction, where wasting time and money turns out to have been the objective (to hurt America).

History is pretty useful here, as we can easily prove things like walls have for thousands of years failed to prevent people climbing up (and down) them.

It is believed that the idea of a ladder was used over 10,000 years ago. We know this because pictures of them were discovered in a cave in Spain.

The ladder is also mentioned in the Bible. Jacob had a dream and in the dream he saw a ladder reaching from Heaven to earth.

Fun fact, ladders are much older than wheels. That’s right, ladders are more than twice as old as the wheel! And we obviously can say walls came before ladders. Thus always remember, when someone asks you which is older the wheel or the wall, go with the ladder (pun intended).

It remains to be seen, however, whether this sort of wall debate and debacle making the US less safe will force the US regime leader to step down.

Incidentally, the Maginot example was not my only one on that speaking tour. Since I was invited to speak in England as well as the US, I thought it only fitting in 2010 that I use castle walls as an example of technology shifts, like a cannon, sawzall or a hypervisor escape vulnerability…the kind of inexpensive and fast-moving thing that makes wall builders shudder.

At Least Five LiDAR Challenges for Vehicles

Sensors Online has a nice summary of the current product management view for LiDAR manufacturers. They spell out these five concerns:

  1. Size
  2. Cost
  3. Reliability
  4. Range
  5. Eye Safety

Conspicuously missing from the list (pun not intended) is integrity of the data.

Reliability in the above list refers only to environmental risks (“replace the moving parts with a solid-state alternative with each component able to meet Grade 1 temperature and qualification”) instead of the sort of overconfidence in imagery I’ve spoken about in the past (August 2014 “Babar-ians at the Gate: Data Protection at Massive Scale,” Blackhat USA).

To be fair, the article is kind of a hidden marketing pitch, written by a company promoting its new line of products:

…patented flip-chip, back-emitting VCSEL arrays that combine high pulsed power arrays, integrated micro-optics, and electronic beam steering on a chip.

So it makes sense they aren’t going to talk about the more fundamental flaws in LiDAR that their company/product isn’t solving.

EV Charging Station Vulnerability

Anyone else read this article about the bug in a Schneider product?

At its worst, an attacker can force a plugged-in vehicle to stop charging

At its best, an attacker can give away power for free.

That’s basically it. A hardcoded password meant the power could be disabled, although really that means it could be enabled again too. Breaking news: a switch installed in public places could be switched without special switching authorization.
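The underlying anti-pattern is simple to illustrate. A hedged sketch (every name here is hypothetical, not Schneider’s actual code) of a credential baked into firmware versus a per-device secret read from protected storage and compared in constant time:

```python
import hmac
import os

# Anti-pattern: a credential baked into the firmware image. Anyone who
# extracts the image (or reads the advisory) can flip the switch.
HARDCODED = "user:pass"  # hypothetical value, for illustration only

def check_hardcoded(supplied: str) -> bool:
    return supplied == HARDCODED

# Safer sketch: a per-device secret provisioned at manufacture (stood in
# for here by an environment variable), compared in constant time.
def check_provisioned(supplied: str) -> bool:
    secret = os.environ.get("CHARGER_SECRET", "")
    return hmac.compare_digest(supplied.encode(), secret.encode())
```

The first check is identical on every unit shipped; the second fails differently per device, which is the whole point.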

It’s kind of like those air pumps at gas stations that say you need to insert $0.25 when really you just have to push the button, or at least yell at the person in the station booth “hey I need some air here, enable the button so I can push it” and you get air for free.

Breaking news: I got some air for my tires without inserting a quarter. Someone call TechCrunch journalists.

Seriously though, it would be news if someone actually had to pay for a plugged-in tire to start filling.

If a gas station owner insists that you have to pay for air even after you’ve used the pump, stand your ground. If that doesn’t work, here’s the form to report the station to state officials.

That’s right, and speaking of denial of service…an attacker could even run off with a gasoline pump hose (they have safety release mechanisms) or an air hose. Such a brazen attack would leave cars that have tires and gas tanks without services when they pull into a station.

Fuel hose disconnects do happen fairly often, by the way. So often that there are videos and plenty of news stories about why and where it happens (protip: bad weather is correlated):

And yet TechCrunch wants us to be scared of EV cables being disconnected:

…unlock the cable during the charging by manipulating the socket locking hatch, meaning attackers could walk away with the cable.

Safety first, amiright? Design a breakaway and attackers can walk away with the cable…for safety.

Such a “vulnerability story” as this EV one by TechCrunch makes me imagine a world where the ranking of stories has a CVSS score attached…a world where “news” like this can theoretically never rise above stories with a severity actually worth thinking about.

An attacker could disable or enable a charging point, but charging status is something easily monitored on a near-continuous basis. Did your car just stop charging? It’s something you and the provider ought to know in the regular course of using a power plug.
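That kind of monitoring is trivial to build. A sketch, where `read_status` and `alert` are hypothetical stand-ins for whatever telemetry hooks a real charge-point provider exposes:

```python
import time
from typing import Callable

def watch_charging(read_status: Callable[[], bool],
                   alert: Callable[[str], None],
                   interval_s: float = 5.0,
                   cycles: int = 3) -> None:
    """Poll a charge-point status and alert on a charging-to-stopped
    transition. Both callables are assumptions for illustration."""
    last = read_status()
    for _ in range(cycles):
        time.sleep(interval_s)
        now = read_status()
        if last and not now:
            alert("charging stopped unexpectedly")
        last = now
```

A provider running even this naive loop would notice the TechCrunch “attack” within one polling interval, which is why it ranks so low as a threat.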

This ranks about as low as you can go in terms of security story value…yet a journalist dedicated a whole page to discuss how a public power-plug can be turned on and off without strong authentication.