Yet “Unother” heartbleed Perspective (YUhP)

With so many people talking about heartbleed and offering their insights (e.g. excellent posts from Bruce Schneier and Dan Kaminsky) I could not help but add my own. That is not entirely true. I was happy to let others field the questions, until reporters contacted me and asked for my insights. After I sent my response they said my answers were helpful, so I thought I might as well re-post them here.

So this is what I recently sent a reporter:

What is Heartbleed?

Heartbleed is a very small change made to a small part of code that is widely used to protect data. You might even say it is a flaw found in the infrastructure we all rely upon for privacy. It is no exaggeration to say it impacts just about everyone who has a password on the Internet. It is basically like discovering that all your conversations for the past two years, which you thought were private, actually could have been heard by someone without any effort. This is very dangerous, which is why it had to be fixed immediately. The potential for harm can be huge when trusted systems have been operating with a flaw. It is hard to quantify who really has been impacted, however, because the damage is a leak rather than an outage. We could look for evidence of leaks now, because people trying to take advantage of the leak will leave particular tracks behind, but it is very unlikely those tracks have been preserved over the two years since the code change was made. The change unfortunately was not recognized as dangerous until very recently.

How is it related to encryption and the websites I use?

The simple explanation is that encryption was used on websites (as well as many other services, including email) to protect data. Encryption can prevent someone from seeing your private information. A change to the encryption code, a widely used library known as OpenSSL, actually undermined its ability to protect your data. Heartbleed means someone in a remote location can see data that was believed and intended to be private. Your password, for example, could have been seen by anyone who knew of this OpenSSL flaw.

If possible, how can I protect myself now that it’s happened?

You can protect yourself going forward with two simple steps. First, verify that the sites you use have fixed the Heartbleed flaw. Often they will push a notice saying they have addressed the problem, or they will post a notice that is easy to find on their site, or you can consult a list of sites that have been tested. Second, change your passwords.

Another way to protect yourself is to get involved in the debate. You could study the political science behind important decisions, such as when and how to trust changes, or the economics of human behavior. You also could study the technical details of the code and join the debate on how best to improve the quality of code that everyone relies upon for their most trusted communications.

The reach of this story is amazing to me. It is like information security just became real for every person in every home. I sat on a bench in an airport the other day and listened to everyone around me give their (horribly incorrect) versions of heartbleed. Some thought it was a virus. Some thought it was related to Windows XP. But whatever they said, it was clear they suddenly cared a lot about whether and how they can trust technology.

I was probably too bullish on the traces/trail part of my answer. It is hard to stay high level while still figuring out some of the details underneath. I have not yet built a good high-level explanation for why the attack leaves no trace on the vulnerable system itself, even though the attack traffic has some obvious characteristics that network monitoring can capture.

Anyway, this clearly boils down to code review. It is a problem as old as code itself. A luminary in the infosec space recently yelled the following on this topic:

THIS IS CALLED A BOUNDS CHECK. I SAW CODING ERRORS LIKE THIS IN THE 70’S

We know there are people very motivated to pore over every memcpy in the OpenSSL codebase, for example, to look for flaws. Some say the NSA would have found it and used it, but in reality the threat landscape is far larger and the NSA has officially denied awareness.
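Since the flaw really is just a missing bounds check, here is a minimal sketch of the idea for readers who have never seen one. It is a simplified simulation in Python, not the actual OpenSSL C code (where the problem sits around a memcpy in the heartbeat handling): the server echoes back as many bytes as the client claims to have sent, without ever checking the claim.

    # Simplified Python simulation of the Heartbleed logic flaw; the real bug
    # lives in OpenSSL's C heartbeat code, this only illustrates the idea.

    # Pretend this is server process memory: the client's heartbeat payload
    # sits right next to unrelated secrets.
    memory = bytearray(b"PING" + b" secret_key=31337; admin_password=hunter2")

    def heartbeat_vulnerable(claimed_len: int) -> bytes:
        # FLAW: trust the length claimed by the client with no bounds check,
        # so the echo can include whatever sits in adjacent memory.
        return bytes(memory[:claimed_len])

    def heartbeat_fixed(actual_payload_len: int, claimed_len: int) -> bytes:
        # FIX: silently discard the request when the claimed length exceeds
        # what was actually received -- the bounds check that was missing.
        if claimed_len > actual_payload_len:
            return b""
        return bytes(memory[:claimed_len])

    # The client sent a 4-byte payload ("PING") but claims it was 64 bytes.
    print(heartbeat_vulnerable(64))  # leaks the adjacent "secrets"
    print(heartbeat_fixed(4, 64))    # returns nothing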

We also know that finding a really bad bounds check does not necessarily mean any obligation to report it in a “fair” way that minimizes harm, which is a harsh reality that raises the question of human behavior. Before looking too deeply at the technical aspects of the problem, consider Bruce’s perspective:

This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

If you are Bruce, of course you would say that. He finds human aspects interesting, with all due respect, because it is the less familiar angle to him, the focus of his new research. However, most people are unfamiliar with technology, let alone encryption and keys, so for them the human aspects are the less interesting angle and they want the technical problem explained. XKCD saw this latter demand. That is why we now have the following beautiful explanation of the technical aspects:

[xkcd comic: “Heartbleed Explanation”]

With that in mind I still actually agree with Bruce. The industry really needs to dig deep into the following sequence of events related to trusted infrastructure change controls and bug discovery. This is what I’ve gathered so far. We may learn more and revise this in the future, but I hope it helps illustrate the sort of interesting human aspects to sort out:

  1. OpenSSL code that created Heartbleed is committed an hour before midnight on New Year’s Eve 2011
  2. In early 2014 Codenomicon starts testing an alpha feature of their Defensics fuzzing product called Safeguard; in April their automated tool finds a flaw in OpenSSL’s heartbeat messaging
  3. Flaw is privately communicated by Codenomicon to CERT in Finland
  4. Someone at Google finds the same flaw and notifies OpenSSL
  5. OpenSSL issues a patch and the flaw goes public, leaving many scrambling to respond immediately

One other thought. I alluded to this in my answer to the journalist but I want to make a finer point here. Some are calling this a maximum level danger event. That perspective raises the question whether data being destroyed, changed or denied could ever be seen as more dangerous. What the cryptography community treats as an 11-out-of-10 event may look different to the availability community. That actually seems to be one of the reasons I have seen management allow encryption to fail: availability risks were seen as more dangerous than confidentiality risks when, unfortunately, there was a trade-off.

Updated to add: Google staff have started to actively dispute claims that anyone found the bug earlier than their company did. Microsoft and Facebook offered bounty money to the Google researcher who first reported the bug to them, but the money was redirected to a charity rather than accepted.

A timeline favoring Google’s interpretation of events, with the discovery date vaguely listed as “March 21 or before,” has been published by a paper in Sydney. Note the author’s request:

If you have further information or corrections – especially information about what occurred prior to March 21 at Google – please email the author…

This makes it sound like Google needs help to recall events prior to March 21, or does not want to open up. Codenomicon’s claim was that it had been testing for months prior to discovery. In any case, everything seems to trace back to around September 2013, probably not by coincidence, which again raises questions about human issues more than technical ones.

Troubled Audit Waters: Trustwave and the Target Breach

My last post is probably overkill on the Microsoft topic so here’s a TL;DR version of one aspect of that story.

Microsoft says an independent auditor will help them avoid this risk in the future. In order not to violate the privacy of their customers without due cause, they will ask a specific third-party attorney of their own choosing for an opinion on the matter.

That does not give me much confidence. It seems only slightly less likely to fail, at least in obvious terms of independence.

Take a look at an important related story in the news: Target’s QSA (Qualified Security Assessor), Trustwave, which was meant to help prevent privacy violations of payment cardholders, is being sued by banks.

There are two parts to the story. One is that an assessor is in a complicated responsibility dance with their client. Did the client fail in their burden to disclose details to the assessor? Did the assessor fail to notice this failure? Did the assessor intentionally overlook failures? The debate over these problems is ancient and the lawsuits are likely to draw from a large body of knowledge, driven in some part by the insurance industry.

The other part of the story is that Trustwave apparently was running a portion of security operations at Target, not just assessing them for adequacy of controls. This is the more interesting angle to me because it seems like a relatively easy risk to avoid.

An assessor is meant to test controls in place. If the control in place is run by the same company as the one assessing its adequacy, then independence is dubious and a conflict-of-interest test is required.

For example, assessor Alice finds Retailer has an inadequate IDS. Alice recommends Retailer replace its existing IDS and buy a new IDS service from service provider Bob. Bob sets up the IDS service and then Alice says Retailer has adequate IDS controls. Then Retailer is breached and people notice Alice and Bob work for the same company. Lawyers ask whether Alice was conspiring with Bob to sell IDS and rubber-stamp assessments, without regard to actual compliance requirements.

Companies have internal auditors test internal controls all the time, so it is not impossible or improbable to have a single authority sit above and manage both roles. Independence is best served transparently. However, one of the primary benefits of bringing in a third-party assessment is the clearest possible form of independence from any operational influence.

The bottom line is that Trustwave was known for selling services and then assessing those same services in order to maximize income opportunities and grow their practice; they found a more lucrative but far less clean business model that now raises the question of adequate separation. If the Target investigations question the model then it could change the industry.


Update March 29: Trustwave’s CEO Robert McCullen has posted an announcement, specifically mentioning the conflict-of-interest issue.

In response to these legal filings, Trustwave would like to reassure our customers and business partners that these claims against Trustwave are without merit, and that we look forward to vigorously defending ourselves in court against these baseless allegations.

Contrary to the misstated allegations in the plaintiffs’ complaints, Target did not outsource its data security or IT obligations to Trustwave. Trustwave did not monitor Target’s network, nor did Trustwave process cardholder data for Target.

As I said, this is a key issue to watch in the dispute.

#Hotmailgate: Where Don’t You Want to Go Today?

I thought with all the opinions flowing about the Hotmail privacy incident I would throw my hat into the ring. Perhaps most notably Bruce Schneier has done an excellent job warning people not to believe everything Google or Facebook is saying about privacy.

Before I get to Bruce’s article (below) I’d like to give a quick summary of the details I found interesting when reading about the Microsoft Hotmail privacy incident.

How it Begins

We know, for example, that the story begins with a Microsoft employee named Alex Kibkalo who was given a less-than-stellar performance review. This employee, a Russian native who worked from both Russia and Lebanon, reacted unfavorably and stole Microsoft pre-release updates for Windows RT and an Activation Server SDK.

Russia? Lebanon?

Perhaps it is fair to say the software extraction was retaliatory, as the FBI claims, but that also seems to be speculation. He may have had other motives. Some might suggest, for example, that Alex’s Russian and Lebanese associations have some geopolitical significance as well. So far I have not seen anyone even mention this angle of the story, but it seems reasonable to consider. It also raises the thorny question of how rights differ by location and nationality, especially in terms of monitoring and privacy.

Microsoft Resources Involved

More to the point of this post, from Lebanon Alex was able to quickly pull the software he wanted off Microsoft servers to a Virtual Machine (VM) running in a US Microsoft facility. Apparently downloading the software all the way to Lebanon would have taken too long, so he remotely controlled a VM, leveraging the high speed and close proximity of systems within US Microsoft facilities.

Alex then moved the stolen software from the Microsoft internal VM to the Microsoft public Skydrive cloud-based file-sharing service. With the stolen goods now in a place easily accessible to anyone, he emailed a French blogger.

The blogger was advised to have a technical person use the stolen software to build a service that would allow users to bypass Microsoft’s official software activation. The blogger publicly advertised the activation keys for sale on eBay and sent an email, from a Hotmail account, to a technical person asking for assistance with the stolen software. This technical person instead contacted Microsoft.

Recap

To recap, an internal Microsoft employee used a Microsoft internal VM and a Microsoft public file-sharing cloud to steal Microsoft assets.

He either really liked using Microsoft or knew that they would not notice him stealing.

The intended recipient of those assets also used a Microsoft public cloud email account to communicate with the employee stealing the software, as well as with a person friendly to Microsoft senior executives.

When All You Have is a Hammer

Microsoft missed several red flags. Their internal virtual environment as well as their public cloud clearly were not detecting a theft in progress. A poor performance review could have been tied to the sensitivity of network monitoring: watching for movement of large assets, or flagging communication with other internal staff who may have been working on the employee’s behalf. Absent more advanced detective capabilities, let alone preventive ones, someone like Alex moves freely across Microsoft resources to steal assets.
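To make that concrete, here is a hypothetical sketch of risk-weighted monitoring for large asset movement. The user names, thresholds and internal domain are invented for illustration; nothing here reflects Microsoft’s actual tooling.

    # Hypothetical sketch of risk-weighted monitoring for large outbound
    # transfers. Names, thresholds and the internal domain are invented.

    DEFAULT_LIMIT_MB = 500        # normal alerting threshold
    ELEVATED_RISK_LIMIT_MB = 50   # tighter threshold for flagged staff

    # Staff flagged by an HR signal (e.g. a recent poor performance review)
    # get the tighter threshold.
    elevated_risk_users = {"alex"}

    def should_flag(user: str, destination: str, size_mb: float) -> bool:
        """Return True if this transfer deserves an investigator's attention."""
        limit = ELEVATED_RISK_LIMIT_MB if user in elevated_risk_users else DEFAULT_LIMIT_MB
        internal = destination.endswith(".corp.example.com")  # hypothetical internal zone
        return (not internal) and size_mb > limit

    print(should_flag("alex", "files.example.com", 120))      # True: flag it
    print(should_flag("pat", "build.corp.example.com", 120))  # False: internal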

A 900-lb gorilla approach to this problem must have seemed like a good idea to someone in Microsoft management. I have heard people suggest a rogue legal staff member was driving the decisions, yet this doesn’t sound plausible.

Having worked with gigantic legal entities in these scenarios, I suspect the investigation and legal teams were coordinated and top-down. Ironically, perhaps the steps most damaging to customer trust were taken by a team called Trustworthy Computing Investigations (TWCI). They asked the Office of Legal Compliance (OLC) for authorization to compromise customer accounts. That to me indicates the opposite of any rogue effort; it was a management-led mission based on an internal code of conduct and procedures.

Hotmail Broken

The real controversy should be that the TWCI target was not internal. Instead of digging through Microsoft’s own security logs and controls, looking at traces of employee activity for what they needed, Microsoft compromised a public customer Hotmail account (as well as a physical home) with the assistance of law enforcement in several countries. They found the traces they were looking for in the home and the Hotmail account: steps that explained how their software was stolen by an internal employee, as well as signs of intent.

The moral of the story, unfortunately, seems to be Microsoft internal security controls were not sufficient on their own, in speed or cost or something else, which compelled the company to protect themselves with a rather overt compromise of customer privacy and trust. This naturally has led to a public outcry about whether anyone can trust a public cloud, or even webmail.

Microsoft, of course, says this case is the exception. They say they had the right under their service terms to protect their IP. These are hard arguments to dispute, since an employee stealing Microsoft IP and using Microsoft services, and even trying to sell the IP by contacting someone friendly with Microsoft, can not possibly be a normal situation.

On the other hand, what evidence do we have now that Microsoft would restrict themselves from treating public as private?

With that in mind, Microsoft has shown their hand; they struggle to detect or prevent IP-theft as it happens, so they clearly aim to shoot after-the-fact and as necessary. There seems to be no pressure to do things by any standard of privacy (e.g. one defined by the nationality of the customer) other than one they cook up internally weighted by their own best interests.

Note the explanation by their Deputy General Counsel:

Courts do not issue orders authorizing someone to search themselves, since obviously no such order is needed. So even when we believe we have probable cause, it’s not feasible to ask a court to order us to search ourselves.

They appear to be defining customers as indistinguishable from Microsoft employees. If you are a Hotmail user, you are now a part of Microsoft’s corporate “body”. Before you send HR an email asking for healthcare coverage, however, note that they also distinguish Microsoft personal email from corporate email.

The only exception to these steps will be for internal investigations of Microsoft employees who we find in the course of a company investigation are using their personal accounts for Microsoft business.

So if I understand correctly, Microsoft employees are allowed the illusion of distinguishing personal email on Hotmail from their business email, which does not really make sense because even public Hotmail accounts are treated as part of the corporate body. And there is no protection from searches anywhere anyway: when Microsoft internal staff, and an external attorney they have hired, believe there is probable cause, they can search “themselves”.

And for good measure, I found a new Google statement that says essentially the same thing. They reserve the right to snoop on public customer accounts, even those of journalists.

“[TechCrunch editor Michael Arrington] makes a serious allegation here — that Google opened email messages in his Gmail account to investigate a leak,” Kent Walker, Google general counsel, said in a statement. “While our terms of service might legally permit such access, we have never done this and it’s hard for me to imagine circumstances where we would investigate a leak in that way.”

Hard perhaps for Kent to imagine, but with nothing stopping them…is imagination really even relevant?

Back to Schneier

Given this story as background, I’d like to respond to Bruce Schneier’s excellent article with the long title: “Don’t Listen to Google and Facebook: The Public-Private Surveillance Partnership Is Still Going Strong”

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like.

This I have to agree with. Reading the Microsoft story I was at first shocked to hear they had cracked their own customer’s email account. Then, after I read the details, I realized they had probable cause and they followed procedures…until I reached the point where I realized nothing was being said about real security. It raises a simple question:

Should Microsoft’s inability to detect or prevent a theft carried out using their own private and public services be a reasonable justification for very broad holes in customer terms of service?

Something Just Hit the Fan

Imagine you are sitting on a toilet in your apartment. That apartment was much more convenient to move into compared to building your own house. But then, suddenly, the owner is standing over you. The owner says that since they cannot tell when widgets are taken from their offices (e.g. they cannot detect which of their employees might be stealing) and they have probable cause (e.g. someone says you were seen with a missing widget), they can enter your bathroom at any time to check.

Were you expecting privacy while you sat on your toilet in your apartment?

Microsoft clearly disagrees and says there’s no need to even knock since they’re entering their own bathroom…in fact, all the bathrooms are theirs and no-one should be able to lock them out. Enjoy your apartment stay.

Surveillance, Not Surveillance

Real security looks like the owners detecting or preventing theft in “their” space rather than popping “your” door open whenever they feel like it. I hate to say it this way, but it is a political problem rather than a technical one: what guide should we use for surveillance in places where it is socially agreed upon, such as watching a shared office to reduce the risk of theft, rather than threatening surveillance in places where people traditionally and reasonably expect privacy?

So here is where I disagree with Schneier:

Google, and by extension, the U.S. government, still has access to your communications on Google’s servers. Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop. It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others. Why not? They don’t partly because they want to keep the ability to eavesdrop on your conversations.

OK, I actually sort of agree with that. Google could provide you with the ability to lock them out and prevent them from seeing your data. But saying they want to eavesdrop on your conversations is where I start to think differently from Bruce. They want to offer tailored services, and marketing if you allow it. The issue is whether we must define the observation space for these tailored services as completely and always open (e.g. Microsoft’s crazy definition of everything as “self”) or whether there is room for privacy.

Give Me Private Cloud or Give Me Encryption…OK I’ll Take Both

Suddenly, and unexpectedly, I am seeing movement towards cloud encryption using private keys unknown to the provider. Bruce says this is impossible because “the US government won’t permit it”. I disagree. For years I worked with product companies to create this capability and was often denied. But the denials were not based on some insidious back door or government worry. Product managers had many reasons to keep encryption off the road map, and the most common was simply that there was not enough demand from customers.

Ironically, the rise of isolated but vociferous demand actually could be the reason we now will see it happen. If Google and Apple move towards a private-key solution, even if only to fly the “we’re better than Microsoft” flag, only a fraction of users will adopt it (there is an unknown usability/cost factor here). And of those users who do adopt eagerly, what percentage will the government come knocking for with a warrant or a subpoena to decrypt? Probably a high percentage, yet still a small population. Provided the cloud providers properly set up key management, they should be able to tell the government they have no way to decrypt or access the data.
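For readers who want to see what “a key unknown to the provider” means in practice, here is a minimal client-side encryption sketch in Python using the widely available cryptography package; the upload call is a hypothetical placeholder, not any provider’s real API.

    # Minimal sketch of client-side encryption with a key the provider never
    # sees (pip install cryptography). The upload call is a hypothetical
    # placeholder, not any provider's real API.

    from cryptography.fernet import Fernet

    # The key is generated and kept on the customer's device only.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    plaintext = b"draft: quarterly numbers"
    ciphertext = cipher.encrypt(plaintext)

    # upload_to_cloud(ciphertext)  # provider stores only ciphertext

    # Served a warrant, the provider can hand over ciphertext but cannot
    # decrypt it; only the key holder can.
    assert cipher.decrypt(ciphertext) == plaintext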

Economics to the Rescue

This means that from a business point of view the cloud provider could improve its offering by enhancing trust with privacy controls, while at the same time reducing the cost burden of dealing with government requests for data. It could be a small enough portion of users that it would not impact services offered to the majority. This balance also could be “nudged” using cost: those wanting enhanced privacy pay a premium. In the end, there would be no way a provider could turn over a key that was completely unknown to them. And if Bruce is right that the government gets in no matter what, then that is all the more reason for cloud providers to raise the bar above their own capabilities.

We should have been headed this way a long time ago but, as I have said, the product managers really did not believe us security folks when we begged, pleaded and even demanded privacy controls. Usability, performance and a mile-long list of other priorities always came first. Things have changed dramatically in the past year, and #Hotmailgate really shows us where we don’t want to go. I suspect Microsoft and its competitors are now contemplating whether and how to incorporate real private-key systems to establish better public cloud privacy options, given the new economic models and customer demands developing.