Securely Erase Your SSD

I presented on secure erasure of SSD in August 2010 at VMworld. The topic was top of mind for me after I met last summer with representatives from Kingston. I explained to my audience the major problem with SSD and compliance: secure wipe technology no longer works. Cloud providers often use large arrays of SSD, which makes it unlikely that the cloud can meet regulatory requirements.

A failure of secure wipe tools is only the beginning of the problem. SSD drives are actually designed to set aside pages of data when they fail and stop writing to them, but data can still be read from them. The procedure that sets them aside does not include any secure destruction of their contents.
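To make the mechanism concrete, here is a minimal toy sketch in Python. It is not how any real controller is implemented; it simply models a flash translation layer with a few spare pages to show how overwriting every logical block can still miss data sitting in a retired page:

```python
# Toy model of an SSD flash translation layer (FTL), purely illustrative:
# not a real drive interface, just a sketch of why overwriting every
# logical block can still leave readable data in retired physical pages.

class ToySSD:
    def __init__(self, logical_blocks=4, spare_blocks=2):
        # physical pages: logical capacity plus over-provisioned spares
        self.physical = [None] * (logical_blocks + spare_blocks)
        # map each logical block to a physical page
        self.mapping = {lba: lba for lba in range(logical_blocks)}
        self.next_spare = logical_blocks

    def write(self, lba, data):
        self.physical[self.mapping[lba]] = data

    def retire(self, lba):
        # the controller marks the current page as failing and remaps the
        # logical block to a spare; the old page is never erased
        self.mapping[lba] = self.next_spare
        self.next_spare += 1

ssd = ToySSD()
ssd.write(0, "customer record")   # sensitive data lands on page 0
ssd.retire(0)                     # page 0 starts failing; block 0 is remapped
for lba in range(4):              # "secure wipe": overwrite every logical block
    ssd.write(lba, "0000")
print(ssd.physical)
# ['customer record', '0000', '0000', '0000', '0000', None]
# The retired page still holds the original data, out of reach of the wipe.
```

The wipe tool sees only the logical address space, so the retired page never gets touched; that is the gap the drive vendors leave open.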

When I brought it to the attention of Kingston they said no one had ever mentioned it to them before as a security concern; they would have to do research and get back to me. I never heard from them, but I continued my research into SSD security and presented my updated findings in September 2010 at the HTCIA conference.

These two presentations were well received (large audiences with high ratings), and they also seem to have influenced other presentations and research.

I was told that my toaster time-line slide, for example, which I used to illustrate the evolution of security technology, may have inspired a toilet time-line (presented at the recent 2011 Cloud Security Alliance Summit at RSA). I do not see the connection from toasters to toilets. The time-lines seem very different to me in purpose and effect, but who am I to argue with CSA toilet humor.

Now, to my further surprise, I have heard that a paper has just been released via USENIX on an issue similar to the one I raised last year with Kingston. Perhaps it was written with the assistance of Kingston; maybe the group discovered it independently. I suspect the former because of some of the details:

Our results lead to three conclusions: First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.
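For readers who want to see what those built-in commands look like in practice, here is a hedged sketch of issuing the ATA Secure Erase from Linux through hdparm and then spot-checking the result. The device path is a hypothetical placeholder, the commands destroy all data and require root, and, as the researchers warn, a clean-looking completion does not prove the firmware implemented the command correctly:

```python
# Hedged sketch only: one way to invoke a drive's built-in ATA Secure Erase
# from Linux via hdparm, then spot-check the result. The device path is
# hypothetical, the commands are destructive, and a correct-looking
# completion does not prove the firmware actually did its job.
import subprocess

DEV = "/dev/sdX"        # hypothetical device; triple-check before running
PASSWORD = "p"          # temporary security password required by the ATA spec

# set a user password, which enables the ATA security feature set
subprocess.run(["hdparm", "--user-master", "u",
                "--security-set-pass", PASSWORD, DEV], check=True)
# issue the built-in secure erase command
subprocess.run(["hdparm", "--user-master", "u",
                "--security-erase", PASSWORD, DEV], check=True)

# crude spot check: sample a few regions; erased drives typically read back
# all zeros (some return all ones), but sampling is not proof of sanitization
with open(DEV, "rb") as disk:
    for offset in (0, 1 << 20, 1 << 30):
        disk.seek(offset)
        sample = disk.read(4096)
        wiped = sample in (b"\x00" * len(sample), b"\xff" * len(sample))
        print(f"offset {offset}: {'looks wiped' if wiped else 'data still present?'}")
```

The point of the paper's first conclusion is exactly that the second command may report success while leaving data behind, which is why the spot check above is a sanity check rather than a guarantee.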

These researchers benefit from an incredibly vague proposal for $2,413,200 in federal funds. Just about anything in the world could fit into the language written into their NSF grant. It is interesting to note that the grant never mentions the word security, nor does its problem statement make any reference to a need for privacy or controls.

The Variability Expedition fundamentally rethinks the rigid, deterministic hardware-software interface, to propose a new class of computing machines that are not only adaptive but also highly energy efficient. These machines will be able to discover the nature and extent of variation in hardware, develop abstractions to capture these variations, and drive adaptations in the software stack from compilers, runtime to applications. The resulting computer systems will work and continue working while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will thus mitigate the variability of manufactured systems and make machines robust, reliable and responsive to the changing operating conditions.

A good study in how to write grant proposals.

I know I was not the first to think about clear-text exposures in SSD, because this year I found standards already underway to address SSD data residue. Perhaps I was only the first to describe the problem to an open and widespread audience (as confirmed by Kingston), and maybe the first to relate it to large enterprise and cloud environments (instead of laptops, media players, etc.). The paper by the grant-based research team confirms part of what I had described and brings it to an even wider audience, but unfortunately they are a tad late.

Other research, done by actual digital forensic investigators who clearly understand the data residue issue, has a more interesting twist. A study in Australia found that SSD drives have started to perform automatic garbage collection to improve their speed, and in doing so they end up preventing forensic analysis. When I spoke with Kingston they told me that SSD performance depends upon regular maintenance, but user intervention was expected and required; they pointed me to a program I had to run to keep the drive tables clean. The latest SSD drives now initiate these performance operations on their own, without any user interaction.

Our experimental findings demonstrate that solid-state drives (SSDs) have the capacity to destroy evidence catastrophically under their own volition, in the absence of specific instructions to do so from a computer.

[…]

If garbage collection were to take place before or during forensic extraction of the drive image, it would result in irreversible deletion of potentially large amounts of valuable data that would ordinarily be gathered as evidence during the forensic process – we call this ‘corrosion of evidence’. But this is only the first problem. The second problem is that any alteration to the drive during or after extraction may make validation of evidence difficult.

[…]

In all three SSD runs, around 160 seconds from the log-in time (i.e. around 200 seconds from power-on), the SSD begins to wipe the drive. After approximately 300 seconds from log-in, the SSD consistently appears to pause briefly before continuing. 350 seconds after log-in, the SSD’s pre-existing evidence data has been almost entirely wiped. In contrast, the HDD controller does not purge the drive.

In other words, the problem I posed to my audiences last year will soon be irrelevant thanks to the automation of SSD performance routines. It will become even less of a problem if secure wipe standards are created and adopted for SSD. The key to the Australian research is that it shows data was unrecoverable even with the SSD attached to a write-blocker. The drive's self-initiated process wiped out almost 20 percent of its data in 20 minutes, faster than the data could be duplicated or extracted.
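To see why self-initiated changes are such a problem for evidence handling, here is a simplified sketch of the usual hash-before-and-after validation step in Python. The paths are hypothetical; the point is only that a drive which garbage-collects on its own can fail this check even when the examiner never writes a byte:

```python
# Illustrative sketch of the validation problem described above: a standard
# forensic workflow hashes the source, images it, then hashes it again to
# prove nothing changed. A drive that garbage-collects on its own can fail
# this check even behind a write blocker. The device path is hypothetical.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream a device or image file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

source = "/dev/sdX"              # hypothetical evidence drive behind a write-blocker
before = sha256_of(source)       # hash before acquisition
# ... acquisition of the forensic image would happen here ...
after = sha256_of(source)        # hash after acquisition

if before != after:
    # with a self-wiping SSD this mismatch can occur with no examiner action
    print("source changed during acquisition; image cannot be validated")
else:
    print("source hashes match:", before)
```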

Chicken Littlestux is Falling

Stuxnet has shown up in CSO magazine with a fingernails-on-a-chalkboard title:

If Stuxnet was cyberwar, is U.S. ready for a response?

Interesting question. Why should we consider Stuxnet cyberwar? No analysis provided in the article. In the same vein we might as well ask if Stuxnet was water soluble, is the US ready to drink it? If Stuxnet was mixed into oatmeal, is the US ready to taste it?

Then comes the CSO article teaser:

The complex Stuxnet worm proved attacks on SCADA and other industrial control systems were possible. Are we ready if one comes our way?

First, I would not call Stuxnet complex, as I have written and presented many times. The attack was arguably complex, but Stuxnet itself was not. I suppose we could also debate the meaning of the word complex, but even Langner (who first discovered it) says Stuxnet was a simple and not well-written exploit.

The Stuxnet attack was very basic. A DLL on Windows was renamed and replaced with a new DLL in order to get onto the embedded real-time system (the controller). It was not necessary to write good code because of the element of surprise; it only had to work pretty well.
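For illustration only, here is a toy sketch of the renamed-library pattern described above. It is written in Python rather than as a Windows DLL, it is not Stuxnet code, and every name in it is made up; it simply shows how a look-alike can forward most calls unchanged while quietly altering the one that matters:

```python
# Toy illustration only (Python rather than a Windows DLL, and not Stuxnet):
# the general shape of the renamed-library trick, where a look-alike sits in
# front of the real driver, forwards most calls unchanged, and quietly alters
# the few that matter. All names here are hypothetical.

class RealDriver:
    """Stand-in for the original vendor library (the 'renamed' DLL)."""
    def read_status(self, device):
        return f"{device}: ok"
    def write_setpoint(self, device, value):
        return f"{device}: setpoint={value}"

class LookalikeDriver:
    """Stand-in for the malicious replacement that callers load instead."""
    def __init__(self, real):
        self._real = real
    def read_status(self, device):
        # pass-through: most calls behave exactly as before, hiding the swap
        return self._real.read_status(device)
    def write_setpoint(self, device, value):
        # the interesting call is intercepted and silently modified
        return self._real.write_setpoint(device, value * 2)

driver = LookalikeDriver(RealDriver())       # caller thinks this is the vendor library
print(driver.read_status("plc-1"))           # plc-1: ok
print(driver.write_setpoint("plc-1", 100))   # plc-1: setpoint=200 (silently doubled)
```

The swap goes unnoticed as long as the pass-through behavior stays faithful, which is why the element of surprise mattered more than code quality.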

Second, it did not prove that attacks on SCADA and other control systems are possible. That was well known by the late 90s, as reflected in US Executive Order 13231 of October 16, 2001, “Critical Infrastructure Protection in the Information Age,” and in Executive Order 13284 of January 23, 2003. In my BSidesSF presentation I explained the controversy Mudge started in 1999 when he told the press he could shut down 30 grids. From the “sophisticated” Maroochy Shire attack in 2000 to the “sophisticated” Aurora attack in 2007, there were many proofs before Stuxnet.

Third, we already know of reliability issues and failures in control systems. I pointed out in my BSidesSF presentation three shutdowns of major nuclear stations in the US Northeast in early 2011. The question “are we ready” can be answered in the present tense for real threats instead of a hypothetical. We know, for example, why more than 50 power plants were knocked offline in Texas recently: they were unprepared for conditions that threatened their availability, despite forecasts. Moreover, the Governor of that state showed exceptionally poor judgment and a lack of situational awareness in his response.

Speaking of “ifs”, I am reminded of a Will Rogers quote:

If stupidity got us into this mess, then why can’t it get us out?

The CSO article would be far better if it tried to explain why, after more than ten years of warnings, critical infrastructure in America is still so susceptible to failure. Proverbs about chickens come to mind. Why is Stuxnet being framed in sky-is-falling terms of cyberwar? Is that the most appropriate way to get a response from management?

Here is how I would have put the question: if we called Stuxnet the same kind of threat that we have been tracking and have known about for years, albeit one executed more carefully, would US critical infrastructure be any better prepared than it has been for the lesser threats that seem to knock it offline?

Snap Judgement: Marc Bamuthi Joseph

Snap Judgement has posted their “first ever LIVE show! No notes, no do-overs, no safety net – the nation’s best storytellers join host Glynn Washington and rock San Francisco’s Brava Theatre.”

In an amazing performance, Marc Bamuthi Joseph uses his many gifts to transport an entire live audience from San Francisco to the heart of Africa. National Poetry Slam champion, Broadway veteran, GOLDIE award winner, Marc is also an inaugural recipient of the United States Artists Rockefeller Fellowship which annually recognizes 50 of the country’s “greatest living artists.”

“The first African American woman that I ever met was a white chick…from Lubbock, Texas…”

Excellent stories based on themes of identity and trust: an American who is black turns to a white African American for help with a trip to Africa.

When most people hear the term “African” they are immediately biased towards a particular image of a person. An attacker leverages preconceived notions like “African” to manipulate and engineer responses from a victim. Our presentations over the past eight years, based on research and linguistic analysis of 419 scams, have illustrated how bias sits at the root of our vulnerability to attacks like fraud.

“Urgent/Confidential — An Appeal for your Serious and Religious Assistance”, National Association for Ethnic Studies (NAES) Conference, April 2003