12-yr Old Finds Mozilla Security Flaw

I remember in the late 1990s when people made fun of the Microsoft certification program because 12-yr-olds were passing the MCSE test. That was a measure of what kids saw as the future career path, and they were mostly right. Today the measure of success has shifted to security bounty programs, as reported by the San Jose Mercury News. I see attempts made to portray a 12-yr-old bug hunter as brilliant:

“Mozilla depends on contributors like these for our very, sort of, survival. Mozilla is a community mostly of volunteers. We really encourage people to get involved in the community. You don’t have to be a brilliant 12-year-old to do that,” [Brandon Sterne, security program manager at Mozilla] says.

[…]

Alex is virtually self-taught, says his mother, Elissa Miller. Reading his parents’ very technical books is not an assignment, it’s something he just does; and he understands them. He has a “gift for the technical,” Elissa says.

The story mentions that it took only 900 minutes of testing (15 hours) for Alex to find the bug he reported. He received $3,000, so that’s $200 per hour by his own estimate. This probably does not account for all the times he sent in bugs before he found the right one, or the time taken by Mozilla to tell him why and how those did not qualify. Even at four times slower it is still a good rate for kids. The bounty program is, in short, offering very well-paid training to find security bugs…which can be found by virtually anyone. Rather than call this an act of brilliance (as some also tried with the MCSE tests) we could call this further proof that the barrier to finding security flaws is actually quite low.
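For what it is worth, here is the back-of-the-envelope arithmetic behind those rates, as a quick sketch (the payout and time come from the story; the four-times-slower case is just a pessimistic adjustment, not anything Mozilla publishes):

    # Quick check of the rates quoted above.
    payout_usd = 3000
    minutes_tested = 900

    hours = minutes_tested / 60                    # 15 hours
    rate = payout_usd / hours                      # $200 per hour
    rate_if_4x_slower = payout_usd / (hours * 4)   # $50 per hour

    print(f"${rate:.0f}/hour, or ${rate_if_4x_slower:.0f}/hour at four times slower")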

Companies that want to raise the barrier should invest in security management up to the executive level who will not let software go to market without passing a rigorous test. Apple just released a program to the web that did not prompt for the old password before allowing a new one to be created. This unbelievable mistake was said by some to be a result of “beta” status. Let’s get real. Apple security just played its hand — if you are watching this game you can now confidently say they lack the desire or capability to stop serious security bugs from going to market. They can’t even find the little ones.

Compare with the HMS Astute, which just ran aground. Even the most sophisticated systems have errors, but a day after the incident the Royal Navy started talking about a court martial and criminal proceedings against the Captain for those errors. Until that level of seriousness is given to software, you can very well expect everyone to be capable of finding and turning in security flaws. It is not only how brilliant attackers are but also how liable defenders are that gives a measure of security maturity and management.

Causes of San Bruno Pipe Explosion

The San Francisco papers continue to seek answers, along with safety regulators, about the San Bruno pipeline explosion. They say today that PG&E employees are being held back from investigators because “they were too traumatized to be questioned.”

This explosion should soon figure into every critical infrastructure security review. Here is a good example why:

The safety board, which is leading the probe, said the pressure spike was caused by a power outage at an unmanned terminal in Milpitas, the end point of the 46-mile pipeline that runs through San Bruno.

An attack vector is now publicly open to discussion. Shut down power at just one terminal, or increase pressure by only ten pounds per square inch, and you can blow a high-risk natural gas pipeline. The threat profiles will now change in response, whether or not this was a one-time incident that depended on a weakened line and on the fact that it took PG&E 34 minutes after the explosion to dispatch crews to manually close the valves.

Now SIEM companies can talk all they want about detecting sophisticated malware that takes 8 months and 4 crack programmers from a powerful nation-state to create, with no known impact (e.g. Stuxnet), and I will have to say “let’s talk about San Bruno”. How and why did real-time dashboards fail on September 10, 2010?
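To underline how low the detection bar seems, here is a minimal sketch of the kind of rule a real-time dashboard could run against terminal telemetry. The field names, the ten-psi margin and the example values are my own assumptions for illustration, not PG&E’s actual SCADA schema or alarm settings.

    # Hypothetical, minimal alerting rule for pipeline terminal telemetry.
    # Everything here is illustrative: flag a pressure rise above a set
    # margin, or a terminal that stops reporting (e.g. after a power outage).
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class TerminalReading:
        terminal: str
        pressure_psi: Optional[float]   # None means no telemetry received

    PRESSURE_RISE_LIMIT_PSI = 10.0      # assumed alarm margin over baseline

    def check_reading(baseline_psi: float, reading: TerminalReading) -> Optional[str]:
        """Return an alert message if the reading looks dangerous, else None."""
        if reading.pressure_psi is None:
            return f"ALERT {reading.terminal}: telemetry lost (possible power outage)"
        rise = reading.pressure_psi - baseline_psi
        if rise >= PRESSURE_RISE_LIMIT_PSI:
            return f"ALERT {reading.terminal}: pressure up {rise:.1f} psi over baseline"
        return None

    # Example with made-up numbers: a spike at an unmanned terminal trips the rule.
    print(check_reading(400.0, TerminalReading("unmanned-terminal", 411.0)))
    print(check_reading(400.0, TerminalReading("unmanned-terminal", None)))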

Hi-tech Attack Sub Exposed

All the latest technology and training in the world was apparently no match for the shallow waters near Skye. The BBC says the Royal Navy’s newest, biggest and most powerful attack submarine — the HMS Astute — has run aground and exposed itself.

Aside from attack capabilities, it is able to sit in waters off the coast undetected, delivering the UK’s special forces where needed or even listening to mobile phone conversations.

Unless, of course, it runs aground. Well, at least out of those three capabilities they can still listen to phone conversations.

There is some chance the mistake is related to a new “platform management system”.

Speaking to the BBC last month, HMS Astute’s commanding officer, Commander Andy Coles, said: “We have a brand new method of controlling the submarine, which is by platform management system, rather than the old conventional way of doing everything of using your hands.

“This is all fly-by-wire technology including only an auto pilot rather than a steering column.”

Auto pilot? Every auto pilot I have ever used at sea has failed. The phrase also brings to mind the Exxon Valdez disaster, which was related to late-night maneuvers outside the shipping lane while on autopilot.

Some interesting trivia about the HMS Astute can be found on Marine Buzz:

  1. Longer than 10 London buses
  2. Wider than 4 London buses
  3. Consumes 18,000 sausages every 10 weeks*, yet only has five toilets for 98 crew
  4. Produces oxygen from sea water and can purify the on-board atmosphere (see #3)

*approximately 2.623 sausages per crew member every day
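The footnote arithmetic roughly checks out; a one-line sanity check, assuming a flat 70 days in ten weeks:

    # 18,000 sausages over 70 days, shared by 98 crew.
    print(18000 / (70 * 98))   # 2.6239..., i.e. the footnote's ~2.623 per day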

Just when you thought stone and feet were confusing, now they have a London bus metric: roughly 1/10 the length of the new class of attack submarine, and 1/4 the width. The next time a bus is late it will be hard not to say “maybe it ran aground”.

The Royal Navy boasts about their sub technology in the following video:

“We are something different. Something for the 21st Century.”

Making Security Usable

Maybe my sense of humor needs an upgrade, but I find this amusing. The School of Computer Science, Carnegie Mellon University, has a page called Technical Report Abstracts. The top of this page has the following details:

CMU-CS-04-135

Making Security Usable

Alma Whitten

May 2004

Ph.D. Thesis

Unavailable Electronically

The last line could be anything from a real warning to a really dry piece of comedy.

Whatever it is meant to convey, Alma Whitten (Google’s privacy chief) has conveniently made her thesis on errors usable, by making it available electronically. Let us hope it was not by error.