The Sensational “Stuxnet: Anatomy of a Virus” Video

I disagree with about 90% of this video, and it annoys me that they do not cite references. Who says there were 20 zero-days? There were only four, and even that is debatable, as I’ve said before. It’s a shining example of how speculation has filtered its way into fodder for sensational videos.

Oooh, scary.

I do not understand how they can avoid mentioning that the person credited with the first and most detailed knowledge of Stuxnet, Ralph Langner, calls it “very basic”. He even explains how antivirus company researchers, infamous for hyping the threat, are wrong in their analysis.

Stuxnet attack very basic. DLL on Windows was renamed and replaced with new DLL to get on embedded real-time systems (controller). It was not necessary to write good code because of the element of surprise — only had to work pretty well
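Langner’s “renamed and replaced” description is the classic wrapper-DLL (man-in-the-middle) pattern: the legitimate library is renamed, a malicious library takes its original name, forwards most calls so everything looks normal, and tampers only with the calls it cares about. A minimal conceptual sketch in Python follows; the class names, block IDs, and return values here are all hypothetical illustrations of the pattern, not Windows loader or Step 7 APIs:

```python
class OriginalDriver:
    """Stands in for the legitimate, renamed library."""
    def read_block(self, block_id):
        return f"data:{block_id}"
    def write_block(self, block_id, data):
        return f"wrote {data} to {block_id}"

class WrapperDriver:
    """Stands in for the malicious library that takes the original's name.
    It proxies every call to the real driver, but alters the one
    operation the attacker targets."""
    def __init__(self, real):
        self._real = real
    def read_block(self, block_id):
        # Reads pass through unchanged, so monitoring sees nothing odd.
        return self._real.read_block(block_id)
    def write_block(self, block_id, data):
        # Writes to the (hypothetical) target block are silently swapped.
        if block_id == "OB35":
            data = "tampered-logic"
        return self._real.write_block(block_id, data)

driver = WrapperDriver(OriginalDriver())
print(driver.read_block("DB1"))                 # benign call, passed through
print(driver.write_block("OB35", "new-logic"))  # silently altered in transit
```

The point of the sketch is how little sophistication the pattern requires: one level of indirection and a name swap, which is consistent with Langner calling the attack “very basic”.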

Nate Lawson gives probably the best and most authoritative explanation of Stuxnet available anywhere, and it also contradicts the scary video. Unfortunately, he made a major marketing mistake: he called his blog post “Stuxnet is embarrassing, not amazing”. It’s a post with a modest and realistic view of the code.

Rather than being proud of its stealth and targeting, the authors should be embarrassed at their amateur approach to hiding the payload. I really hope it wasn’t written by the USA because I’d like to think our elite cyberweapon developers at least know what Bulgarian teenagers did back in the early 90’s.

What he should have called it was something like “What the next Stuxnet will look like” or “How Stuxnet could be 100x more powerful”. That would have earned him at least as much buzz as the nonsense peddled in the video above, if not more.

And what this video should have said is that Iran was infected by a low-grade attack because of poor security management practices, and was compromised by an insider. What were the chances the nuclear program would have succeeded anyway, given that maintenance failures and rust in thousands of centrifuges were also causing them problems? Or, to put it the other way, what are the chances that a high rate of centrifuge failure was unanticipated, as explained by the Institute for Science and International Security (ISIS)?

The destruction of 1,000 out of 9,000 centrifuges may not appear significant, particularly since Iran took steps to maintain and increase its LEU production rates during this same period. […] One observation is that it may be harder to destroy centrifuges by use of cyber attacks than often believed.

Although the attack was well planned and targeted to exploit a specific set of issues, it leveraged weak and known-bad controls such as unnecessary services, poor isolation/segmentation, and no host-based monitoring. It is truly scary to see, over and over again (for more than 10 years now), that nuclear energy companies rely on obfuscation and self-assessment more than on security best practices to address risk. Calling Stuxnet sophisticated gives the Iranians far too much credit for their defences and just plays into the hands of those who want to escalate international political conflict.

How Cloud Will Kill Everything Before It

I like this analysis from 2009 by Jim Starkey, President of NimbusDB. It’s buried towards the end of a long post by Todd Hoff at High Scalability.

I’ve probably said this before, but the cloud is a new computing platform that some have learned to exploit, others are scrambling to master, but most people will see as nothing but a minor variation on what they’re already doing. This is not new. When time sharing was invented, the batch guys considered it as remote job entry, just a variation on batch. When departmental computing came along (VAXes, et al), the timesharing guys considered it nothing but timesharing on a smaller scale. When PCs and client/server computing came along, the departmental computing guys (i.e. DEC), considered PCs to be a special case of smart terminals. And when the Internet blew into town, the client server guys considered it as nothing more than a global scale LAN. So the batch guys are dead, the timesharing guys are dead, the departmental computing guys are dead, and the client server guys are dead. Notice a pattern?

The first thing I notice is that he does not give any examples of who has died. What defines death? Lack of developer interest? Absence of training? How should we measure the death toll?

I work with guys every day who started in batch, timesharing, departmental, client/server…they are not dead. They might be curmudgeonly but they definitely are far from dead. They are found in and among the talent developing the next generation technology. Ok, the batch guys are starting to die, but that’s from age.

Even the equipment from batch, timesharing, departmental, client/server is still in use…anything but dead. From a VAX running mission-critical military operations or a radiological system to a Tandem or mainframe keeping banks on-line, the technology is still alive.

Second, “death” is not caused by superior technology. DEC not only considered PCs a special case of smart terminals, it influenced their very design and (eventually) made some amazingly great PCs of its own. Early microprocessor designs borrowed heavily from DEC minicomputer architectures such as the PDP-11, and Microsoft’s Windows NT used DEC’s VMS as its model. They copied the prior technology and made minor changes because that is less risky than a completely new start. Technology was NOT the deciding factor in DEC’s demise or “death”.

Companies that won in competition did not have to sell the best technology. Microsoft comes immediately to mind in software. Dell did not make the best PC when it took off in the early 1990s. Even Intel has not always had the best processor. The success of technology in America comes from extensive marketing to leading companies that have money to spend and are willing to change partners. It’s a shift in group-think driven by decisions outside of, rather than because of, technology: new license models, different support terms, and so on. On top of the financial and relationship factors are the social ones. If everyone does something, then quality matters less to some because quantity is safety. It is the same as letting the popular crowd tell you what to buy: you need a new pair of pants because the pair you are wearing is no longer in fashion. Pants go “dead” by a very particular definition of fashion, regardless of fit or capability.

I am reminded of the time I asked a security director at a critical infrastructure company what criteria she used to buy products. “I call all the other companies in our industry and ask them what they bought,” she told me without batting an eye. She had the “no one was ever fired for buying IBM” sensibility: buy what everyone else is buying and no one can blame the technology’s faults on you.

Third, Jim Starkey’s approach to history would lead us astray if we get stuck on a simple life/death paradigm instead of digging into the messier process of evolution and adaptation. He could have us believing that all restaurants died when the McDonald’s hamburger reached some magical market-penetration number. 8 billion served? 10 billion served? When did life really begin for McDonald’s and kill off all prior forms of cuisine? It makes far more sense to look at McDonald’s, the giant service provider, as a continuously evolving company that has to stay ahead of competition from smaller, more innovative restaurants. Salads and healthy food were really just a minor change; they could have “killed” McDonald’s, but instead the company started marketing itself as a healthy food provider. IBM, likewise, has continuously evolved, re-branding itself all the way from batch processing through to cloud.

An evolutionary model is about adaptability to changes in group consumption. IBM, Microsoft, HP, AT&T…there are many, many examples of companies that have survived for decades (back to my first point). The death of DEC (and Wang, for that matter) came from a failure to change its messaging: a historically naive belief in the loyalty and momentum of group-based consumption. DEC would have survived if it had followed the minor changes in technology with more competitive and offensive responses. It should have beaten every competitor at their own game while continuing to dominate its own. Instead it rested on the notion that it had what people wanted, rather than fighting to convince people that it still had something worth buying.

Food for thought the next time someone says the cloud is a disruptive technology, or that it will kill everything before it.

Penguin Flight Leads to Arrests in Prague

Three stand accused of stealing a plastic penguin from an art installation in Prague.

Prague police said on Tuesday that they had arrested three foreigners and charged them with stealing a penguin sculpture from Prague’s central Museum Kampa. The yellow plastic sculpture was one of an identical series perched on a stone wall outside the museum on the Vltava river and illuminated at night.

Three foreigners? Vague description.

The thieves had to get past a two-meter high fence to snatch the penguin, police said. Hulan added the financial damage to the museum totaled Kč 70,000. “It only took the police four days to apprehend the criminals,” the police boasted on their website.

The accused were reported to be a Russian, a Spanish speaker and a French speaker, all in their 20s.

Perhaps the Spanish speaker and French speaker refused to reveal their nationalities. Or perhaps the Czechs are trying to make the point that if you speak Russian, then you are from Russia; if you speak Spanish or French, they’ll give you the benefit of the doubt, since you could be from anywhere. The Czech dislike of Russians could thus be similar to the reasoning of American security analysts who find software that runs in Chinese and call that proof enough of a connection to China.

Virtualization and the DHS 20 for FISMA

The Department of Homeland Security recently released FY2011 CIO FISMA Reporting version 1.0, which has some interesting updates from prior requests.

The new guidance that really caught my attention, however, is a working draft released by NIST showing that the DHS top 20 security controls for FISMA are “negatively impacted” by virtualization. No details or clarification of those terms have yet been posted, just this list.