“Give Me 3” Passing Rule in CA

LA Mayor Villaraigosa has unveiled a “Give Me 3” Bike Safety Poster

The Mayor also announced that he would like to “make the 3 Foot Passing Rule a 3 Foot Passing Law” in California. He will be introducing the bill, going to Sacramento and working with the bicycling community to ensure that this becomes a reality. “We’ll keep at it until it becomes part of the California Vehicle Code.”

LA has to be one of the most bike-unfriendly cities anywhere. When I lived there many years ago it was common for a bike lane to end abruptly at an intersection with eight lanes of freeway traffic and no way to get across. Apparently the very first LA Bicycle Summit was just held this year. Excellent to see them take (three?) steps to at least make bicycling safer.

SmartMeters Run Into Santa Cruz Resistance

Indybay says Protesters Halt Smart Meter Installation in Santa Cruz County

Heidi Bazzano, one of the protesters at 38th and Portola this morning, said, “there are so many problems with ‘smart’ meters. PG&E, the government, and any hacker worth his salt will know when you wake up, what appliances you use, when you go on vacation. The meters overcharge people, increase carbon emissions, expose us to EMF which is a confirmed carcinogen, and worst of all, we’re paying for them through hikes in our electric rates!”

“One of the protesters” is not exactly a qualified opinion. And their description of a hacker sounds a lot like the bogeyman, or Santa Claus, rather than a real threat. Watch out, he knows when you have been bad or good… This framing makes the protester sound uninformed. Confirmed carcinogen? Confirmed where?

Those who are electrically sensitive have reported that the intense bursts of radiation from ‘smart’ meters are amongst the worst they have ever experienced. People throughout the state have been reporting headaches, nausea, dizziness, sleep disruption and other health impacts after smart meters are installed. PG&E has declined to remove the new meters even though they are causing adverse health impacts, leading some local residents to flee the state and stay with relatives. Some have even been forced into homelessness, living in their cars with the hope that their smart meter will be removed.

The health risks still all sound theoretical. Some might correlate smart meters with general health issues, but where are the audits, studies, or tests that prove causation? A placebo test or control-group study would be interesting. I can understand opposition to the meters after auditors caught billing mistakes; that problem was documented and proven. I do not understand the vague health argument.

Indybay does not offer insights. They link instead to StopSmartMeters, which gives only more vague references, laced with heavy-handed sarcasm.

PG&E-ese: “A SmartMeter device transmits relatively weak radio signals, resembling those of many other devices we use every day, like cell phones and baby monitors. A major radio station, by contrast, usually transmits with 50,000 times as much power.”

English Translation: “A DumbMeter device transmits relatively weak radio signals compared with your microwave oven (which we initially asked the FCC for permission to install but we realized that humans who are cooked like hot dogs have trouble authorizing a debit account). We’ll conveniently neglect to mention that cell phone and baby monitor wireless technologies have been implicated in brain tumors and other nasty lethal ailments, trusting that the public’s ignorance of wireless impacts will hold out long enough for us to finish installation.”

First, the joke is a counterpoint to their entire argument: it admits the SmartMeter company is motivated to do no harm because it needs consumers healthy enough to pay their bills. That could be the end of their protest right there.

Second, the style reads to me like a story from The Onion. I might think the site is a hoax were it not for links to real news stories about city councils considering whether to block installation.

Are councils and local governments driven more by fear than by any evidence of risk? An article in SFGate says this is very likely.

Of all the complaints filed with PG&E, 16 percent came from customers who did not yet have a smart meter, Burt said. In other words, they couldn’t be reacting to a mechanical problem with the meter.

Another bit of evidence suggests that fears rather than malfunctions drive at least some of the complaints. The Sacramento Municipal Utility District gets more customer complaints about its own smart meters following newspaper or television stories about PG&E’s meters. That includes stories about the meters’ accuracy as well as complaints that the wireless devices could pose a health risk – an idea that PG&E strenuously rejects.

“Whenever we see a spike in stories about PG&E’s smart meters, we see a spike in complaints,” said SMUD spokesman Chris Capra.

What happens when there is a spike in stories about stories about PG&E smart meters?

VMworld 2010 Recap: SSD Security

Themes I picked up at VMworld this year:

  • Difference between security and compliance
  • Mixed-mode compliance (burden of proof and segmentation)
  • Guidelines for compliance practices in cloud
  • Encryption with key management
  • Tokens for the cloud, to resolve latency between a web tier in the cloud and a database kept private
  • Automation risks (HAL9000)
  • Forensics of cloud

Each one of those is a fun topic in and of itself. I could certainly write a book about securing the cloud, or the virtual environment, at this point.

One very interesting detail also caught my attention. I walked into the Kingston booth and saw them promoting SSD technology.

Never one to shy away from an opportunity to ask about security, I found the most technical staff person in the booth and peppered him with questions. Specifically, I asked about the difference in forensics, given the lack of a spinning disk.

It very quickly became apparent that a secure wipe of an SSD was IMPOSSIBLE. The Kingston tech told me point-blank that as the pages where data is written become unstable, the disk marks them and stops writing to them. The data is simply left behind.

I asked Kingston if they would release a tool to wipe those tombstoned pages and they said no. I asked if they would release a tool to read them. No. They will simply sit there with data; without manufacturer support there is no way for the disk owner to get at those dead pages of data.

Then I asked if it would be possible for me to connect directly to the flash chips to audit for data or measure the success of a secure wipe. The answer, as expected, was no. And for good measure the Kingston rep told me they OEM the chips (commodity- or market-based storage on the boards), so not only would I get no help bypassing the controller for direct access to the chips… the chips would probably be different from SSD to SSD.

OK, then.

The good news for privacy is that when an SSD dies it is really dead. Unlike data on a spinning disk, the data may be nearly impossible to recover without expensive tools.

The bottom line is that flash-based “disks” such as SSDs use a completely new internal architecture. Our old drives used magnetic storage “platters”, and we have a long history of reading and recovering memory locations, even if we have to pull out the platters and play them like records.

SSDs, built from non-moving flash chips soldered to a board, are a whole new setup and aren’t susceptible to our old magnetic tricks (don’t even try degaussing).

Instead of an arm moving along the platters, an SSD has an FTL (flash translation layer) to put data down, move it around, and keep track of it. The layer is proprietary, and it means, for example, that directly writing to or modifying storage is impossible without first having the FTL erase a space. If the FTL decides a “page” isn’t worth writing to anymore because of wear and tear, that page is set aside despite still having data on it.
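
To make the stranding problem concrete, here is a minimal sketch, in Python, of how a translation layer can leave data behind on worn-out pages. Everything in it (the `ToyFTL` class, the wear limit, the page counts) is invented for illustration; a real FTL is proprietary firmware inside the controller, not host-visible code.

```python
# Toy model of a flash translation layer (FTL) that retires worn pages
# without erasing them. Purely illustrative, not any vendor's design.

WEAR_LIMIT = 3  # writes a page tolerates before it is marked unreliable

class ToyFTL:
    def __init__(self, num_pages):
        self.flash = [b""] * num_pages  # raw contents of each physical page
        self.wear = [0] * num_pages     # write count per physical page
        self.retired = set()            # pages the controller will never reuse
        self.map = {}                   # logical address -> physical page

    def write(self, logical, data):
        """Write to a logical address; the FTL picks the physical page."""
        for phys in range(len(self.flash)):
            if phys in self.retired or phys in self.map.values():
                continue
            self.flash[phys] = data
            self.wear[phys] += 1
            self.map[logical] = phys
            if self.wear[phys] >= WEAR_LIMIT:
                # Worn out: retire the page WITHOUT erasing it. Whatever
                # was just written is stranded there for good.
                self.retired.add(phys)
            return
        raise IOError("no usable pages left")

    def host_wipe(self):
        """What a host-side 'secure wipe' can actually reach."""
        for phys in self.map.values():
            if phys not in self.retired:
                self.flash[phys] = b"\x00"  # retired pages are untouchable

ftl = ToyFTL(num_pages=4)
for i in range(6):
    ftl.write(0, f"secret-{i}".encode())  # rewrite one logical address
ftl.host_wipe()
print([ftl.flash[p] for p in sorted(ftl.retired)])
# -> [b'secret-4', b'secret-5']: old data survives the "wipe"
```

The final print shows the point: after the host-side “wipe”, the retired pages still hold their last secrets, which matches what the Kingston tech described.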

Interesting to note that wear and tear is a serious problem with SSDs. The pages have only so much life in them, unlike our old magnetic platters that seemed to last forever. So running a “wipe” command can actually reduce the life of the disk. Trying to defrag an SSD (totally useless, because there is no platter/arm seek delay) likewise just reduces the life of the disk by increasing wear and tear on it.

What this all means, really, is that future storage systems will have to rely more and more on encryption for data privacy. Traditional methods such as multi-pass erase/wipe (meant to ensure every path of the head hits the platter) are now irrelevant. The consumer/owner of the storage simply will not be able to secure-erase based on guidance from the past. Instead, destroying a key will make all the encrypted data on the disk unreadable.
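
That key-erase idea is easy to sketch. Here is a minimal illustration in Python using the third-party `cryptography` package; the AES-GCM choice and all the names are my assumptions, not any vendor’s scheme, and real self-encrypting drives do this inside the controller rather than on the host.

```python
# Crypto-erase sketch: if everything written to flash is ciphertext,
# then destroying the key IS the wipe. Illustrative only.
# Requires the third-party package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # media encryption key
nonce = os.urandom(12)

# What actually lands on the flash pages is ciphertext.
on_flash = AESGCM(key).encrypt(nonce, b"sensitive records", None)

# "Secure erase" = destroy the key, not the pages. The ciphertext can
# sit forever on retired pages the FTL will never let us overwrite.
key = None

# Without the key, recovery means breaking AES-256-GCM, so the stranded
# bytes on worn-out pages are no longer a privacy problem.
```

The design choice is the point: because only ciphertext ever touches the flash, the unreachable retired pages stop mattering.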

From the smallest SSD to the largest storage array, encryption is the future, with secure-wipe procedures now focused only on the key.

Google Blames Vulnerability Report Error on Compilers

The Google Online Security Blog has posted an interesting update in response to an IBM 2010 Risk Report.

…we were confused by a claim that 33% of critical and high-risk bugs uncovered in our services in the first half of 2010 were left unpatched. We learned after investigating that the 33% figure referred to a single unpatched vulnerability out of a total of three — and importantly, the one item that was considered unpatched was only mistakenly considered a security vulnerability due to a terminology mix-up. As a result, the true unpatched rate for these high-risk bugs is 0 out of 2, or 0%.

IBM has an updated chart now. Although one can see how Google might take such a sensitive and defensive position when confronted with vulnerability data, their analysis comes across as shockingly one-sided.

They first highlight four “factors working against [vulnerability databases]”. All have a clear tone of “don’t trust those databases”, but only one says the vendors have an important role: disclosure in consistent formats. The finger-pointing then goes a step further with two suggestions:

To make these databases more useful for the industry and less likely to spread misinformation, we feel there must be more frequent collaboration between vendors and compilers. As a first step, database compilers should reach out to vendors they plan to cover in order to devise a sustainable solution for both parties that will allow for a more consistent flow of information. Another big improvement would be increased transparency on the part of the compilers — for example, the inclusion of more hard data, the methodology behind the data gathering, and caveat language acknowledging the limitations of the presented data.

I think calling the report misinformation is a bit harsh. Their post says databases are not to be trusted only because the “compilers” do not reach out and are not transparent enough. That should be a two-way commentary. There is no need to place all the blame on database researchers and none on vendors like Google. Google could publish more patch information and be more transparent about its recorded vulnerabilities. They could lead by example, of course, and fix their security communication and management issues, especially around consistency. That might be the third, but most important, step to make these databases more useful.