Category Archives: Security

VMworld 2010 Recap: SSD Security

Themes I picked up at VMworld this year:

  • Difference between security and compliance
  • Mixed-mode compliance (burden of proof and segmentation)
  • Guidelines for compliance practices in cloud
  • Encryption with key management
  • Tokenization for cloud (resolving latency between a web tier in the cloud and a DB kept private)
  • Automation risks (HAL9000)
  • Forensics of cloud

Each one of those is a fun topic in and of itself. I could certainly write a book about securing the cloud, or the virtual environment, at this point.

One very interesting detail also caught my attention. I walked into the Kingston booth and saw them promoting SSD technology.

Never one to shy away from an opportunity to ask about security, I found the most technical staff person in the booth and peppered him with questions. Specifically, I asked about the difference in forensics, given the lack of a spinning disk.

It very quickly became apparent that a secure wipe of an SSD was IMPOSSIBLE. The Kingston tech told me point-blank that as the pages where data is written become unstable, the disk marks them and stops writing to them. The data is simply left behind.

I asked Kingston if they would release a tool to wipe those tombstoned pages and they said no. I asked if they would release a tool to read them. No. They will simply sit there with data; without manufacturer support there is no way for the disk owner to get at those dead pages of data.

Then I asked if it would be possible for me to connect directly to the flash chips to audit for data or measure the success of a secure wipe. The answer, as expected, was no. For good measure, the Kingston rep told me they OEM their chips (commodity- or market-based storage on the boards), so not only would I get no help bypassing the controller for direct access to the chips…the chips would probably differ from SSD to SSD.

OK, then.

The good news for privacy is that when an SSD dies it is really dead: unlike a spinning disk, recovering its data may be nearly impossible without expensive tools.

The bottom line is that flash-based “disks” such as SSD use a completely new internal architecture. Our old drives used magnetic storage “platters” and we have a long history of reading and recovering memory locations even if we have to pull out the platters and play them like records.

SSDs, with their non-moving flash chips soldered to a board, have a whole new setup and aren’t susceptible to our old magnetic tricks (don’t even try degaussing).

Instead of an arm moving along the platters, an SSD has an FTL (flash translation layer) to put data down, move it around, and keep track of it. This layer is proprietary, which means, for example, that directly writing or modifying storage is impossible without first having the FTL erase a space. If the FTL decides a “page” isn’t worth writing to anymore because of wear and tear, that page is set aside despite still having data on it.
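To make the page-retirement behavior concrete, here is a toy model in Python. This is an illustration of the concept only, not any vendor’s actual firmware; the wear limit, the naming, and the simplified logical-to-physical mapping are all assumptions for the sketch. The key point it demonstrates is that a retired page keeps whatever was last written to it, and a host-level “wipe” never touches it.

```python
WEAR_LIMIT = 3  # hypothetical erase-cycle limit per page


class ToyFTL:
    """Toy flash translation layer: retires worn pages without erasing them."""

    def __init__(self, num_pages):
        self.pages = [b""] * num_pages  # physical page contents
        self.wear = [0] * num_pages     # erase cycles used per page
        self.bad = set()                # retired ("tombstoned") pages

    def write(self, logical, data):
        # Real FTLs keep a logical->physical map; here we just grab the
        # first healthy physical page.
        for phys in range(len(self.pages)):
            if phys in self.bad:
                continue
            self.wear[phys] += 1
            if self.wear[phys] > WEAR_LIMIT:
                # Page retired: the FTL stops using it, but the data
                # from earlier write cycles is simply left behind.
                self.bad.add(phys)
                continue
            self.pages[phys] = data
            return phys
        raise IOError("no healthy pages left")

    def secure_wipe(self):
        # A host-level wipe can only reach pages the FTL still exposes.
        for phys in range(len(self.pages)):
            if phys not in self.bad:
                self.pages[phys] = b"\x00"
        # Return retired pages that still hold stale data.
        return [p for p in self.bad if self.pages[p] not in (b"", b"\x00")]
```

Running four writes against a four-page device wears out page 0; a subsequent `secure_wipe()` zeroes the healthy pages but reports page 0 still holding the last data written to it, which is exactly the forensic leftover the Kingston tech described.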

Interesting to note that wear and tear is a serious problem for SSDs. Pages have only so much life in them, unlike our old magnetic platters, which seem to last indefinitely. So running a “wipe” command can actually reduce the life of the disk. Defragmenting an SSD (totally useless, since there is no platter/arm speed issue) likewise just shortens the disk’s life by increasing wear and tear.

What this all means, really, is that future storage systems will have to rely more and more on encryption for data privacy. Traditional methods such as multi-pass erase/wipe (meant to ensure every path of the head hits the platter) are now irrelevant. The consumer/owner of the storage simply will not be able to secure-erase based on guidance from the past. Instead, erasing a key will render all the encrypted data on the disks unreadable.
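The key-erase idea can be sketched in a few lines. To keep the example self-contained I use a toy keystream built from SHA-256 rather than a real cipher; an actual self-encrypting drive would use something like AES-XTS in hardware, and the function names here are my own invention. The point is only the shape of the argument: if flash only ever holds ciphertext, then destroying the 32-byte key “erases” every page at once, including the retired ones no wipe command can reach.

```python
import hashlib
import secrets


def keystream_xor(key: bytes, data: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (toy cipher, demo only)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))


# The drive generates a media key at format time...
media_key = secrets.token_bytes(32)

# ...and every write is encrypted before it touches flash.
plaintext = b"customer records"
stored_on_flash = keystream_xor(media_key, plaintext)

# Normal reads decrypt transparently.
assert keystream_xor(media_key, stored_on_flash) == plaintext

# "Secure erase" = overwrite the 32-byte key, not terabytes of flash.
media_key = secrets.token_bytes(32)  # the old key is gone
assert keystream_xor(media_key, stored_on_flash) != plaintext
```

Note the asymmetry that makes this attractive for SSDs: the wipe operation touches a few dozen bytes of key material instead of every page, so it costs no wear on the flash at all.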

From the smallest SSD to the largest array of storage, encryption is the future with the focus of secure-wipe procedures now left only for the key.

Google Blames Vulnerability Report Error on Compilers

The Google Online Security Blog has posted an interesting update in response to an IBM 2010 Risk Report.

…we were confused by a claim that 33% of critical and high-risk bugs uncovered in our services in the first half of 2010 were left unpatched. We learned after investigating that the 33% figure referred to a single unpatched vulnerability out of a total of three — and importantly, the one item that was considered unpatched was only mistakenly considered a security vulnerability due to a terminology mix-up. As a result, the true unpatched rate for these high-risk bugs is 0 out of 2, or 0%.

IBM has an updated chart now. Although one can see how Google might take such a sensitive and defensive position when confronted with vulnerability data, their analysis comes across as shockingly one-sided.

They first highlight four “factors working against [vulnerability databases]”. All have a clear tone of “don’t trust those databases,” but only one says the vendors have an important role — disclosure in consistent formats. The finger-pointing then goes a step further with two suggestions:

To make these databases more useful for the industry and less likely to spread misinformation, we feel there must be more frequent collaboration between vendors and compilers. As a first step, database compilers should reach out to vendors they plan to cover in order to devise a sustainable solution for both parties that will allow for a more consistent flow of information. Another big improvement would be increased transparency on the part of the compilers — for example, the inclusion of more hard data, the methodology behind the data gathering, and caveat language acknowledging the limitations of the presented data.

I think calling the report misinformation is a bit harsh. Their post only says databases are not to be trusted because the “compilers” do not reach out and are not transparent enough. That should be a two-way commentary. There is no need to place all the blame on database researchers and none on vendors like Google. Google could publish more patch information and be more transparent about its recorded vulnerabilities. They could lead by example, of course, and fix their own security communication and management issues, especially around consistency. That might be the third, but most important, step to make these databases more useful.

Social Networks Fool InfoSec Pros

BitDefender says they have a survey showing that over 30% of the users who accepted a friendship with a bogus profile work in the IT security industry.

Although it would be cool to jump into this statistic, I do not see any analysis or data proving that the users were not faking their own profiles.

Turnabout is fair play, no? How much of this information that BitDefender collected is real?

The study sample group included 2,000 users from all over the world registered on one of the most popular social networks. These users were randomly chosen in order to cover different aspects: sex (1,000 females, 1,000 males), age (the sample ranged from 17 to 65 years with a mean age of 27.3 years), professional affiliation, interests etc. In the first step, the users were only requested to add the unknown test profile as their friend, while in the second step several conversations with randomly selected users aimed to determine what kind of details they would disclose.

Ironic that they would assume it can be trusted. Or did they verify? The complete 400K report does not give any verification of the survey group, so maybe we can assume they also could have been duped while they were trying to dupe others. The closest thing I found was this note:

These outcomes were tested against the motivation of IT security industry users to become friends with the blonde girl, in order to ensure that they didn’t accept the friendship request just to have “study material” for their own research.

That means they asked the people they were trying to befriend about their own motivation; 53% said “a lovely face” was their reason for accepting the girl. Was this a game response or sincere? I don’t see it as validation.

The experiment revealed that the most vulnerable users appeared to be those that worked in the IT industry: after a half an hour conversation, 10% of them disclosed to “the blonde face” personal sensitive information such as: address, phone number, mother’s and father’s name, etc — information usually used in recovery passwords questions. In addition to that, after a 2 hour conversation, 73% revealed what appears to be confidential information from their work place, such as future strategies, plans, and unreleased technologies/software.

A two-hour conversation with a fake profile. That’s impressive, but I still would like to see validation results: what percentage of those claiming to work in IT were actually verified to work in IT? Did they divulge real or fake information? When a study begins with the premise that you can easily fool people online, it would seem logical to proceed with caution and not believe everything a new contact might say.