US to Follow Chinese Common Cell Power Rule

China was said to be trying to reduce waste when it passed a rule five years ago that all mobile devices must be charged by a common USB interface.

China, through YD/T 1591-2006 “Technical Requirements and Test Method of Charger and Interface for Mobile Telecommunication Terminal Equipment,” created a requirement that cell phones must be charged from a USB charger.

[…]

To converge all the external connection functionality onto a single USB interface, several problems need to be solved, including routing audio over the same interface as data, detecting what external accessories are connected, maintaining high performance for all devices, and keeping power low.

Europe followed in 2009. Now, despite Apple’s best efforts to deploy annoyingly proprietary interfaces, ComputerWorld says the US will follow the Chinese USB rule.

By January 2012, all U.S. cell phones will have a common micro-USB interface that will allow universal external power chargers to use the port, CTIA Chairman Dan Hesse announced at a keynote at CTIA here today.

The variety of charging ports used in cell phones and smartphones today has irritated American users for years, especially as Europe moved forward on a common micro USB interface for data devices.

Oh, that’s funny, ComputerWorld does not mention the Chinese at all.

I agree with their assessment of American irritation. One of the reasons I dumped every electronic device I owned with iPhone/iPod interface a couple years ago was because I moved to a USB-only rule at home and at work. I have been surprised to find hotel rooms and gyms with electronics that have Apple proprietary interfaces. It’s like seeing a treadmill with a Betamax slot.

Bottom line is that availability of power will go up while waste goes down with a rule like this. There are compromises in features and maybe even functionality, but availability improvements and waste reduction seem worth it to me.

MHTML Exploit Evolution and Warning

Michal Zalewski includes an important paragraph towards the end of his analysis of the MHTML exploit affecting Google users as well as “a significant proportion of all sensitive web applications on the Internet”.

Based on this 2007 advisory, it appears that a variant of this issue first appeared in 2004, and has been independently re-discovered several times in that timeframe. In 2006, the vendor reportedly acknowledged the behavior as “by design”; but in 2007, partial mitigations against the attack were rolled out as a part of MS07-034 (CVE-2007-2225). Unfortunately, these mitigations did not extend to a slightly modified attack published in the January 2011 post to the full-disclosure mailing list.

Great to see the evolution has been included in the discussion. That was one of my points in my BSidesSF presentation when I criticized Symantec and McAfee for their analysis of certain infamous incidents. It felt like it was becoming too easy for security analysts to push the ZOMG button instead of taking time to trace and explain the development path that attackers followed. Certain shortcuts taken by Microsoft in 2007, for example, could have been called out for leading to problems the following year and especially in 2009. Instead, security researchers are often given incentives to make a discovery look unique (“Download our new ZOMG report now! Try our new anti-ZOMG product!”).

Risk happens. But security can do itself a great disservice if it does not try to take mistakes back to product management and convince them to apply a corrective/new risk formula for the future — breaches should be traced back to decisions whenever possible.

This attack is focused more on a server flaw than prior iterations, which seems innovative, but it still has a lineage.

In other words, if we are too hasty and label a breach an amazing state-sponsored intelligence effort, we allow vendors to claim it was impossible to foresee and the cost of prevention too high to achieve (until now, with a handy new ZOMG tool). Vendors would get less wiggle room, and more responsibility for quality engineering, if security teams instead presented a logical progression of warnings and missed opportunities showing where the issue could have been fixed much earlier.

Kudos to Zalewski on that account. His post is excellent. I also really like his vendor-neutral recommendations and that he offers defensive suggestions for both server and client.

It appears that the affected sites generally have very little recourse to stop the attack: it is very difficult to block the offending input patterns perfectly, and there may be no reliable way to distinguish between MHTML-related requests and certain other types of navigation (e.g., loads). A highly experimental server-side workaround devised by Robert Swiecki may involve returning HTTP code 201 Created rather than 200 OK when encountering vulnerable User-Agent strings – as these codes are recognized by most browsers, but seem to confuse the MHTML fetcher itself.
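The idea behind Swiecki's workaround can be sketched in a few lines. This is a minimal illustration only, not his actual code; in particular the User-Agent match on "MSIE" is my own assumption about how one might guess at the vulnerable client, since the MHTML fetcher runs inside Internet Explorer on Windows.

```python
# Sketch of the experimental server-side workaround described above:
# answer suspected MHTML fetch requests with "201 Created" instead of
# "200 OK". The User-Agent check is an illustrative assumption, not a
# vetted fingerprint of the MHTML fetcher.
from http.server import BaseHTTPRequestHandler, HTTPServer

def status_for_user_agent(ua):
    """Return 201 for clients that could be the vulnerable MHTML fetcher;
    200 for everyone else. Browsers treat both codes as success, so
    normal visitors are unaffected."""
    return 201 if "MSIE" in ua else 200

class WorkaroundHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        self.send_response(status_for_user_agent(ua))
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

# To serve: HTTPServer(("127.0.0.1", 8080), WorkaroundHandler).serve_forever()
```

The appeal of the trick is that 201 is a perfectly legal success code for ordinary browsers, so legitimate users see no difference while the fetcher chokes.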

Until the problem is addressed by the vendor through Windows Update, I would urge users to consider installing a FixIt tool released by Microsoft as an interim workaround.

How the Dutch WiFi Hacker Escaped Conviction

Technically he was convicted on a separate charge, so he did not go free, but the charges against him for hacking into a WiFi network were dismissed. PC World gives the following explanation:

A computer in The Netherlands is defined as a machine that is used for three things: the storage, processing and transmission of data. A router can therefore not be described as a computer because it is only used to transfer or process data and not for storing bits and bytes. Hacking a device that is no computer by law is not illegal, and can not be prosecuted, the court concluded.

The prosecution had to prove the wireless router was used for storage, processing and transmission of data. That does not sound terribly hard to do (a router stores logs and routes data; packets are processed and transmitted), but apparently they proved only one or two of the three elements, not all of them. Also, if the law had used the word “or” instead of “and” (storage, processing or transmission of data) the judge might have reached a different result. The ruling was appealed.
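The court's reasoning boils down to a conjunction test, which a toy snippet makes plain (the element labels and which ones were proven are my own illustration, not the court record):

```python
# The statutory definition required all three elements jointly ("and");
# under an "or" wording, any single proven element would have sufficed.
elements_proven = {
    "storage": False,       # e.g., router logs; here assumed not proven
    "processing": True,
    "transmission": True,
}

is_computer_as_written = all(elements_proven.values())  # "and" wording
is_computer_if_or = any(elements_proven.values())       # hypothetical "or"
```

One unproven element flips the "and" test to false, which is exactly the gap the defense walked through.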

SSB Radios and Revolution

Almost a decade ago when I sailed across an open stretch of the Pacific I was introduced to an inexpensive radio connected to a laptop to send email. Although our boat was thousands of miles from land for days, we had some comfort knowing a brief email message could be sent to friends and family at very little cost.

The technology we used was based on Single Sideband (SSB). The reason email can travel so far on the radio we had on the boat is efficiency. A standard 4-kW AM broadcast transmitter puts only half its power into the signal (2 kW), and that power is split again between two identical sidebands (1 kW each).


AM Sidebands

A single-sideband transmitter does not bother with the carrier or the second sideband, so it can put all its power into one sideband (thus the name) and improve efficiency (up to 16x) to carry speech longer distances. The lack of a carrier can make voices sound funny (even a slight tuning error shifts the pitch of the received audio) but for text there is no noticeable difference. Just 1 kW on an SSB radio can reach the equivalent range of a 4-kW AM or FM transmitter.
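The power arithmetic above can be laid out explicitly. This uses the article's simplified 50/50 carrier split; real AM carrier-to-sideband ratios depend on modulation depth.

```python
# Back-of-the-envelope AM vs SSB power budget, per the simplified
# numbers in the text (not a full treatment of modulation depth).
am_total_kw = 4.0
carrier_kw = am_total_kw / 2                  # half the power feeds the carrier
sideband_total_kw = am_total_kw - carrier_kw  # 2 kW left for the signal
per_sideband_kw = sideband_total_kw / 2       # 1 kW in each mirror-image sideband

# An SSB transmitter skips the carrier and the redundant sideband, so
# 1 kW of SSB output matches the useful per-sideband power of this
# 4-kW AM rig: a 4:1 advantage before any receiver-side gains.
power_advantage = am_total_kw / per_sideband_kw
```

The further gains that push the figure toward 16x come from the receiver side (a narrower bandwidth admits less noise), which this sketch does not model.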

This technology has been around since the early 1900s. It was in service by the 1930s (lawsuits over patents delayed adoption) to connect a public radiotelephone circuit between New York (via Rocky Point listening station) and London (via Rugby listening station). After WWII the US Strategic Air Command adopted SSB as the standard for long-distance transmissions in its new fleet of B-52 aircraft.

LeMay became aware of the successes of amateur SSB work, and in 1956 undertook two flights, one to Okinawa and the other to Greenland, during which SSB was put to the test using Amateur Radio gear and hams themselves.

It thus makes sense for sailboats to carry SSB for their long journeys over open spaces. The protocol we used on the boat to encapsulate and process our POP3 email was developed by SailMail.

The SailMail system implements an efficient email transfer protocol that is optimized for use over communications systems that have limited bandwidth and high latency. Satellite communications systems and SSB-Pactor terrestrial radio communications systems both have these characteristics. The SailMail email system’s custom protocol substantially reduces the number of link-turn-arounds and implements compression, virus filtering, spam filtering, and attachment filtering. The combination of the protocol, compression, and filtering dramatically improves communications efficiency.
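SailMail's actual protocol is proprietary to them, but the value of compression on a slow link is easy to demonstrate with a generic example (zlib here is a stand-in for whatever codec SailMail really uses, and the message text is invented):

```python
import zlib

# A typical position report; daily reports repeat heavily, which is
# exactly what dictionary-based compressors exploit.
message = ("Position report: all well aboard. Winds ESE 15-20 kt, "
           "seas 2 m. Next report in 24 hours.\n") * 3

raw = message.encode("utf-8")
compressed = zlib.compress(raw, 9)  # level 9 = best compression

# On a link measured in tens of bytes per second, every saved byte
# shortens airtime and reduces the number of costly link turn-arounds.
savings = 1 - len(compressed) / len(raw)
```

Fewer bytes per message also means fewer retransmissions on a noisy HF channel, which compounds the benefit.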

Efficiency, efficiency…and point-to-point long distance communication. Does this remind you of another environment? Recently I have been reading about the challenges of communication in the revolutions in Ivory Coast, Bahrain, Saudi Arabia, Egypt, Libya…. The military, media or intelligence communities must be developing software for inexpensive USB SSB radios and laptops to stay in contact with groups inside those countries.

The Israelis have certainly documented various radio communication devices available to the Hizbullah including scanners and receivers set to monitor helicopter frequencies. Yet the Philippine government seems to suggest that SSB radios are difficult to obtain.

“However, the embassy cannot provide all the needed information [chassis number, model, etc.] since the post still does not have the radio transceiver units,” [Philippine Ambassador to Syria Wilfredo] Cuyugan said, emphasizing the need for the Foreign Affairs department to purchase at least one unit of HF SSB radio transceivers and five units of very high or ultra high frequency long range handheld radio transceivers.

The technology seems like a good fit. It may be less common than cell phones and consumer wireless or microwave options, yet some Tweets, email or other messages must now be escaping via SSB radio.