Securely Erase Your SSD

I presented on secure erasure of SSDs in August 2010 at VMworld. The topic was top of mind for me after I met last summer with representatives from Kingston. I explained to my audience the major problem with SSDs and compliance: secure wipe technology no longer works. Cloud providers often use large arrays of SSDs, which makes it unlikely their clouds can meet regulatory requirements.

A failure of secure wipe tools is only the beginning of the problem. SSDs are actually designed to set aside pages when they start to fail: the controller stops writing data to them, but data can still be read from them. The procedure that sets them aside does not include any secure destruction of their contents.
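
To make the mechanism concrete, here is a minimal sketch of the idea. It is my own toy model (the class and block layout are hypothetical, not any vendor's firmware): once a failing block is set aside and its logical address remapped, a host-level overwrite never touches the old physical block, yet the old contents remain readable at the flash level.

```python
# Toy model only: why retired flash blocks can retain readable data.
# FlashTranslationLayer is hypothetical and vastly simplified; real SSD
# controllers are far more complex.

class FlashTranslationLayer:
    def __init__(self, physical_blocks=8):
        self.physical = ["" for _ in range(physical_blocks)]   # raw flash contents
        self.mapping = {i: i for i in range(physical_blocks)}  # logical -> physical
        self.retired = set()                                    # set-aside blocks

    def write(self, logical, data):
        self.physical[self.mapping[logical]] = data

    def retire(self, logical, spare_physical):
        # The block starts failing: stop writing to it and remap the logical
        # address to a spare, but do NOT erase the old physical block.
        self.retired.add(self.mapping[logical])
        self.mapping[logical] = spare_physical

ftl = FlashTranslationLayer()
ftl.write(0, "cardholder data")    # sensitive data lands in physical block 0
ftl.retire(0, spare_physical=7)    # block 0 wears out and is set aside

# A secure wipe tool can only address logical block 0, which now points at
# physical block 7; the retired block keeps its contents.
ftl.write(0, "0000000000000000")
print(ftl.physical[0])             # -> "cardholder data" is still there
```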

When I brought this to Kingston's attention they said no one had ever raised it with them as a security concern before, and that they would look into it and get back to me. I never heard back, but I continued my research into SSD security and presented my updated findings in September 2010 at the HTCIA conference.

These two presentations were well received (large audiences with high ratings), and they also seem to have influenced other presentations and research.

I was told that my toaster timeline slide, for example, which I used to illustrate the evolution of security technology, may have been an inspiration for a toilet timeline (presented at the recent 2011 Cloud Security Alliance Summit at RSA). I don’t see the connection from toasters to toilets. The timelines seem very different to me in purpose and effect, but who am I to argue with CSA toilet humor.

Now, to my further surprise, I have heard that a paper addressing an issue similar to the one I raised last year with Kingston has just been released via USENIX. Perhaps it was written with the assistance of Kingston; maybe this group discovered the problem independently. I suspect the former because of some of the details:

Our results lead to three conclusions: First, built-in commands are effective, but manufacturers sometimes implement them incorrectly. Second, overwriting the entire visible address space of an SSD twice is usually, but not always, sufficient to sanitize the drive. Third, none of the existing hard drive-oriented techniques for individual file sanitization are effective on SSDs.
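
For anyone who wants to test that first conclusion about built-in commands, the sketch below shows one common way to trigger a drive's internal erase from Linux using hdparm. The device path and password are placeholders, and, as the paper's findings suggest, the result should be verified by reading the drive back afterwards rather than taken on faith.

```python
# Sketch: invoking the ATA Secure Erase built into many drives via hdparm.
# DEVICE and PASSWORD are placeholders; pointing this at the wrong device
# will destroy its data. Verify the erase afterwards, since the paper found
# some firmware implements the command incorrectly.

import subprocess

DEVICE = "/dev/sdX"   # placeholder: the SSD to be sanitized
PASSWORD = "p"        # temporary password required by the ATA security feature set

# 1. Check that the drive supports the security feature set and is "not frozen".
subprocess.run(["hdparm", "-I", DEVICE], check=True)

# 2. Set a temporary user password, which enables the security feature set.
subprocess.run(["hdparm", "--user-master", "u",
                "--security-set-pass", PASSWORD, DEVICE], check=True)

# 3. Issue the built-in erase command.
subprocess.run(["hdparm", "--user-master", "u",
                "--security-erase", PASSWORD, DEVICE], check=True)
```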

These researchers benefit from an incredibly vague proposal for $2,413,200 in federal funds. Just about anything in the world could fit into the language written into their NSF grant. It is interesting to note that the grant never mentions the word security, nor does its problem statement make any reference to a need for privacy or controls.

The Variability Expedition fundamentally rethinks the rigid, deterministic hardware-software interface, to propose a new class of computing machines that are not only adaptive but also highly energy efficient. These machines will be able to discover the nature and extent of variation in hardware, develop abstractions to capture these variations, and drive adaptations in the software stack from compilers, runtime to applications. The resulting computer systems will work and continue working while using components that vary in performance or grow less reliable over time and across technology generations. A fluid software-hardware interface will thus mitigate the variability of manufactured systems and make machines robust, reliable and responsive to the changing operating conditions.

A good study in how to write grant proposals.

I know I was not the first to think about clear-text exposures in SSDs, because this year I found standards already underway to address SSD data residue. Perhaps I was only the first to describe the problem to an open and widespread audience (as confirmed by Kingston), and maybe the first to relate it to large enterprise and cloud environments (instead of laptops, media players, etc.). The paper by the grant-based research team confirms part of what I had described and brings it to an even wider audience, but unfortunately they are a tad late.

Other research, done by actual digital forensic investigators who clearly understand the data residue issue, has a more interesting twist. A study in Australia found that SSDs have started to perform automatic garbage collection to improve their speed, and in doing so they end up preventing forensic analysis. When I spoke with Kingston they told me that SSD performance depends on regular maintenance, but that user intervention was expected and required; they pointed me to a program I had to run to keep the drive tables clean. The latest SSDs now initiate these performance operations on their own, without any interaction.

Our experimental findings demonstrate that solid-state drives (SSDs) have the capacity to destroy evidence catastrophically under their own volition, in the absence of specific instructions to do so from a computer.

[…]

If garbage collection were to take place before or during forensic extraction of the drive image, it would result in irreversible deletion of potentially large amounts of valuable data that would ordinarily be gathered as evidence during the forensic process – we call this ‘corrosion of evidence’. But this is only the first problem. The second problem is that any alteration to the drive during or after extraction may make validation of evidence difficult.

[…]

In all three SSD runs, around 160 seconds from the log-in time (i.e. around 200 seconds from power-on), the SSD begins to wipe the drive. After approximately 300 seconds from log-in, the SSD consistently appears to pause briefly before continuing. 350 seconds after log-in, the SSD’s pre-existing evidence data has been almost entirely wiped. In contrast, the HDD controller does not purge the drive.

In other words, the problem I posed to my audiences last year will soon be irrelevant thanks to the automation of SSD performance routines. It will become even less of a problem if secure wipe standards are created and adopted for SSDs. The key finding of the Australian research is that data was unrecoverable even though the SSD was attached to a write-blocker. The SSD's self-initiated process wiped out almost 20 percent of its data in 20 minutes, faster than the drive could be duplicated/extracted.
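
To put that race in rough numbers, here is an illustrative back-of-the-envelope calculation. The only figures taken from the discussion above are the 20 percent in 20 minutes; the drive size and imaging throughput are assumptions I picked for the example.

```python
# Illustrative arithmetic only: the race between forensic imaging and a
# drive's self-initiated garbage collection. DRIVE_GB and IMAGE_RATE_GB_MIN
# are assumed values; the wipe figures echo the ones cited above.

DRIVE_GB = 64            # assumed drive size
IMAGE_RATE_GB_MIN = 1.5  # assumed imaging throughput through a write-blocker
WIPE_FRACTION = 0.20     # roughly 20 percent of the data purged...
WIPE_WINDOW_MIN = 20     # ...in roughly 20 minutes

imaging_minutes = DRIVE_GB / IMAGE_RATE_GB_MIN
wiped_before_image_done = min(1.0, (imaging_minutes / WIPE_WINDOW_MIN) * WIPE_FRACTION)

print(f"Imaging the drive takes about {imaging_minutes:.0f} minutes")
print(f"Garbage collection could purge about {wiped_before_image_done:.0%} "
      f"of the evidence before the image completes")
```

On those assumptions nearly half the evidence is gone before the image finishes, which is the investigators' point: the write-blocker stops the host from altering the drive, but it cannot stop the controller from altering itself.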
