Category Archives: Security

LSE Blog on Africa: Al-Qaeda Defunct

LSE has launched a new blog called “Africa at LSE” as part of its African Initiative; one of its first posts takes a fresh, critical look at Al-Qaeda’s effectiveness and its future.

Even though the Arab Spring has all but passed Al-Qaeda by, it may yet give al-Zawahiri an opportunity to stamp his authority in the early days of his leadership given his contacts and access to networks in Egypt and North Africa.

However, Dr Brahimi is sceptical that he will be able to do so.

“Al-Qaeda seem to miss every chance made available to them. For example, they squandered the enormous opportunity presented by the US-led invasion of Iraq, which was widely viewed as illegitimate.

“As a vanguard group supposed to be protecting Muslims, this was their moment to step up. They didn’t and their story in Iraq is that of shameful and unrelenting internecine bloodshed.

Buffer Overflow and Event Calculus

An interesting paper on representing and reasoning about events and their effects.

This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion.

It uses the example of water filling a vessel to explain how events can be sequenced without reference to an external clock.

The domain comprises a TapOn event, which initiates a flow of liquid into the vessel. The fluent Filling holds while water is flowing into the vessel, and the fluent Level(x) holds if the water is at level x in the vessel, where x is a real number. An Overflow event occurs when the water reaches the rim of the vessel at level 10. The Overflow event initiates a period during which the fluent Spilling holds. A TapOff action is also included.

[…]

In other words, the formalisation yields the expected result that the water stops flowing into the vessel (at time 15), when it starts spilling over the rim, and that the level is subsequently stuck at 10.
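
To make the narrative concrete, here is a minimal Python sketch (my own, not from the paper) that plays out the same story as a discrete simulation: TapOn initiates Filling, the level rises until it reaches the rim at 10, and Overflow terminates Filling and initiates Spilling. The fill rate and the time of TapOn are illustrative assumptions, chosen so the tap stops contributing at time 15 as in the result above.

# Discrete simulation of the tap/vessel narrative.
# Fluents: Filling, Spilling, Level(x).  Events: TapOn, Overflow.
# Timings and fill rate are illustrative assumptions, not taken from the paper.
RIM = 10          # Overflow occurs when the water reaches the rim at level 10
FILL_RATE = 1     # level units added per time step while Filling holds

def simulate(tap_on_at=5, end=20):
    filling, spilling, level = False, False, 0
    history = []
    for t in range(end + 1):
        if t == tap_on_at:                  # TapOn initiates Filling
            filling = True
        if filling and level >= RIM:        # Overflow terminates Filling, initiates Spilling
            filling, spilling = False, True
        if filling:
            level = min(level + FILL_RATE, RIM)
        history.append((t, filling, spilling, level))
    return history

for t, filling, spilling, level in simulate():
    print(f"t={t:2d}  Filling={filling!s:5}  Spilling={spilling!s:5}  Level={level}")

Running it shows Filling switching off and Spilling switching on at t=15, with the level then stuck at 10, which is the outcome described above.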

I am reminded of system logs; events described on their own are all fine until you try to map one system of measure against another system of measure… then you need some way to express events relative to a central measure, or time. The TapOn event would be at 5:15 UTC, for example. I am also reminded of the scientists who marvelled at an Amazonian tribe said to live without an abstract concept of time.
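
As a rough illustration of that point, here is a small Python sketch that maps events recorded by two different systems, each logging in its own local time, onto a single UTC timeline so they can be ordered against each other. The system names, log formats and timezone offsets are all made up.

# Illustrative only: normalise events from two systems onto a UTC timeline.
from datetime import datetime, timezone, timedelta

# Hypothetical raw log entries: (system, local timestamp string, event name)
raw_events = [
    ("valve-controller", "2011-06-20 07:15:00", "TapOn"),    # this system logs in UTC+2
    ("level-sensor",     "2011-06-20 01:25:00", "Overflow"), # this system logs in UTC-4
]

# Assumed clock offset for each system
offsets = {
    "valve-controller": timezone(timedelta(hours=2)),
    "level-sensor":     timezone(timedelta(hours=-4)),
}

def to_utc(system, local_ts):
    local = datetime.strptime(local_ts, "%Y-%m-%d %H:%M:%S")
    return local.replace(tzinfo=offsets[system]).astimezone(timezone.utc)

# Order events from both systems against the central measure (UTC)
for system, ts, event in sorted(raw_events, key=lambda e: to_utc(e[0], e[1])):
    print(f"{to_utc(system, ts).isoformat()}  {event:<10} ({system})")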

Bitcoin Market Crash: Auditor Blamed for Breach

Ten days ago DailyTech gave a long and thoughtful analysis of Bitcoin economics and risk called “Digital Black Friday”. It boiled down to this paragraph, which sounds to me like the equivalent of regulation.

…market volatility poses a very serious risk to BTC users — be they miners, traders, or merchants who accept BTC as payment for goods or services. To that end, a major improvement would be for Bitcoin exchanges to implement mandatory market closures if the currency value dropped below a threshold. In theory this would be relatively easy to implement, and we expect that it will be done at some point to prevent one-day flash inflation/deflation
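
For what it is worth, the “mandatory market closure” they describe is essentially a circuit breaker, and a toy version is easy to sketch in Python. The threshold, reference price and interface below are my own assumptions, not a description of any real exchange.

# Hypothetical circuit breaker: halt trading after a large drop from a reference price.
class CircuitBreaker:
    def __init__(self, reference_price, max_drop=0.25):
        self.reference_price = reference_price   # e.g. the session's opening price
        self.max_drop = max_drop                 # halt once the price falls this far (25%)
        self.halted = False

    def on_trade(self, price):
        drop = (self.reference_price - price) / self.reference_price
        if drop >= self.max_drop:
            self.halted = True
        return self.halted

# Illustrative prices only, loosely inspired by the June 2011 crash
breaker = CircuitBreaker(reference_price=17.50)
for price in [17.50, 15.00, 13.00, 10.00, 0.01]:
    if breaker.on_trade(price):
        print(f"Trading halted at {price:.2f} ({(1 - price / 17.50):.0%} below reference)")
        break
    print(f"Trade accepted at {price:.2f}")

The hard part, of course, is choosing the threshold and deciding how and when trading resumes, not the halt itself.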

Of course this was viciously attacked by proponents of a “free economy” who argued that controls are always unjustified. Check the comments at the end of their story and you will find statements like this one:

The concept of artificial market limits has no place in a free economy and cannot stand in one.

DailyTech compared the crash to the stock market, which seems fair, but what they did not include in their analysis was the possibility of a malicious actor. They did not mention the risk of someone stealing accounts and then dumping all the Bitcoin. Yesterday they updated the story with the title “Mega-Hack of Bitcoin”.

Mt. Gox is admitting to a major breach and has shut down, in an unprecedented action. In all, approximately $8.75M USD worth of Bitcoins appear to have — at least temporarily — been stolen in the intrusion.

Suddenly all the individual investment accounts were 0wned by a single entity, and that entity decided to exercise the freedom to dump value, crash the market and run. The response from Mt. Gox to reports of breached accounts, cited by DailyTech, is notable.

As I already replied you, your funds were stolen by someone logging in onto your account with your password. Your funds are right now on a bitcoin address and have not moved since then.

As a reminder we assume no responsibility should your funds be stolen by someone using your own password.

The coins stolen from Mt.Gox were not stolen using any CSRF exploit… [the thieves] logged in on users account using the correct login and password. We have logs showing the loggin succeed on first try.

Blaming the user is rarely, if ever, a safe response to security incidents. What actually happened, now documented by dataloss.db, is that the Mt. Gox user database was leaked. Then more than 100K Bitcoins were sold and hundreds of thousands more went missing.

It appears that someone who performs audits on our system and had read-only access to our database had their computer compromised. This allowed for someone to pull our database.

It is interesting that they say it was someone who performs financial audits, as if to deflect blame. Here is another way of saying the same thing: our system did not detect that an infected machine was accessing our database, and we did not notice suspicious activity against highly sensitive data, namely someone downloading the entire database without authorisation or need.
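
To illustrate the kind of detective control that seems to have been missing, here is a small Python sketch that flags any session reading an unusually large share of a sensitive table. The table names, row counts, threshold and audit-log format are all hypothetical, just to show the idea.

# Hypothetical detective control: flag sessions that read a large fraction of a sensitive table.
from collections import defaultdict

SENSITIVE_TABLES = {"users", "wallets"}                  # assumed sensitive tables
TABLE_ROW_COUNTS = {"users": 60_000, "wallets": 60_000}  # assumed table sizes
MAX_ROW_FRACTION = 0.05                                  # alert if one session reads >5% of a table

def scan_audit_log(entries):
    """entries: iterable of (session_id, table, rows_returned) from a query audit log."""
    rows_by_session = defaultdict(int)
    for session_id, table, rows in entries:
        if table in SENSITIVE_TABLES:
            rows_by_session[(session_id, table)] += rows
    return [(s, t, n) for (s, t), n in rows_by_session.items()
            if n > MAX_ROW_FRACTION * TABLE_ROW_COUNTS[t]]

# Example: a read-only "auditor" session pulling the whole users table in chunks
log = [("auditor-7", "users", 10_000) for _ in range(6)]
for session, table, rows in scan_audit_log(log):
    print(f"ALERT: session {session} read {rows} rows from '{table}'")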

Now the “anarchistic” darling of economic theory is facing questions of compliance: Why did Mt. Gox use weak ciphers to protect the database? Why did they wait to fix inactive accounts? Why did they allow read access to the database from a compromised system? Why did Bitcoin allow a market crash? Even free-market advocates want to know.

“Stuxnet: Anatomy of a Virus” Sensational Video

I disagree with about 90% of this video, and find it annoying that they do not cite references: who says there were 20 zero-days? There were only four, and even that is debatable, as I have said before. It is a shining example of how speculation has filtered its way into fodder for sensational videos.

Oooh, scary.

I do not understand how they can avoid mentioning that Ralph Langner, the researcher credited with the first and most detailed analysis of Stuxnet, calls it “very basic”. He even explains how antivirus company researchers, infamous for hyping the threat, are wrong in their analysis.

Stuxnet attack very basic. DLL on Windows was renamed and replaced with new DLL to get on embedded real-time systems (controller). It was not necessary to write good code because of the element of surprise — only had to work pretty well

Nate Lawson gives probably the best and most authoritative explanation of Stuxnet available anywhere, which also contradicts the scary video. Unfortunately, he made a major marketing mistake: he called his blog post “Stuxnet is embarrassing, not amazing”. It is a post with a modest and realistic view of the code.

Rather than being proud of its stealth and targeting, the authors should be embarrassed at their amateur approach to hiding the payload. I really hope it wasn’t written by the USA because I’d like to think our elite cyberweapon developers at least know what Bulgarian teenagers did back in the early ’90s.

What he should have called it was something like “What the next Stuxnet will look like” or “How Stuxnet could be 100x more powerful”. That would have given him as much buzz as the nonsense peddled in the above video, or even more.

And what this video should have said is that Iran was infected by a low-grade attack because they had poor security management practices and were compromised by an insider. What are the chances that the nuclear program would have succeeded anyway, given that maintenance failures and rust in thousands of centrifuges were also causing them problems? Or to put it the other way, what are the chances that a high rate of centrifuge failure was unanticipated, as explained by the Institute for Science and International Security (ISIS)?

The destruction of 1,000 out of 9,000 centrifuges may not appear significant, particularly since Iran took steps to maintain and increase its LEU production rates during this same period. […] One observation is that it may be harder to destroy centrifuges by use of cyber attacks than often believed.

Although the attack was well planned and targeted to exploit a specific set of issues, it leveraged weak and known-bad controls such as unnecessary services, poor isolation/segmentation and no host-based monitoring. It is truly scary to see, over and over again (for more than 10 years now), that nuclear energy companies rely on obfuscation and self-assessment more than a set of security best practices to address risks. Calling Stuxnet sophisticated gives the Iranians far too much credit for their defences and just plays into the hands of those who want to escalate international political conflict.