History at LSE ranked #1

I was just informed that my Alma Mater, the International History department at LSE, has been ranked #1 in the 2011 Complete University Guide.

It was given an overall score of 100 out of 100 possible points.

Oxford was second with a score of 99.8. It is hard to understand how Durham ended up third with higher graduate prospects and student satisfaction than Oxford, but perhaps research assessment and entry standards carry more weight?

Congrats LSE. Go Beavers!

My experience there was excellent. I primarily studied international security during the Cold War in Asia, Africa and Europe. My thesis was on defense ethics, strategy, information warfare, and the long-term global security impact of military occupation of the Horn of Africa: “Anglo-Ethiopian Relations 1940-1943: British military intervention and the return to power of Emperor Haile Selassie”.

Some ask me what this has to do with information security. I usually point to two areas:

  1. Security is essentially a taxonomy of authority: who did what, where and when. History students read books and write analyses of events to provide a coherent and accurate picture of an incident, in much the same way that security analysts look at computer data. Instead of reading first-person accounts I now read computer logs and similar output. Designing effective controls depends on a clear understanding of risk, which comes from an analysis of vulnerability and threat information from past events. It is no coincidence that security professionals, especially those in the armed forces, are usually so interested in studying history.
  2. The invasion and occupation of Ethiopia by Britain in 1940 created a delicate mission of stability and security. British authorities, best known for a legacy of imperialism, aimed to (re)establish sovereignty in a country proud of its independence. There are clear lessons to be learned from the decisions and outcome of this policy, especially when looking at present-day objectives for US operations in Afghanistan and Iraq. The result of the West’s post-WWII policy for the Horn of Africa, briefly put, was a failure in terms of security. It instead precipitated revolution, invited territorial war (with Somalia) and fueled the rise to power of an anti-American military party (the Derg). The region’s instability, coupled with diminished American influence, continues to present serious security problems for the West even today (e.g. piracy and safe harbor for terrorists).

Thus, the study of international history can teach excellent event/incident analysis and reporting skills that transfer easily into information security and risk management.


This seems to be a popular search:


Sometimes it is just this:


Could this be meant for XLSX; the flaw in Microsoft’s decompression of XLSX files?

The vulnerabilities could allow remote code execution if a user opens a specially crafted Excel file. An attacker who successfully exploited any of these vulnerabilities could gain the same user rights as the local user.

The problem stemmed from a lack of validation of the ZIP header when the XML was decompressed. This allowed memory corruption and, from there, remote code execution. Seven vulnerabilities were reported in July 2009, and Microsoft released a fix in March 2010 with MS10-017.

Not XLlpX, but similar.

Kim Cameron on Google Wifi Surveillance

A storm is brewing between Kim Cameron and Ben Adida. I do not mean to step into the middle of their dispute over the Google Wifi surveillance fiasco. Both have good arguments to make.

I did notice, however, that Cameron makes a significant error in today’s blog post, “Misuse of network identifiers was done on purpose”.

SSID and MAC addresses are the identifiers of your devices. They are transmitted as part of the WiFi traffic just like the payload data is. And they are not “publically broadcast” any more than the payload data is.

Actually, in my view the SSID and MAC are broadcast more publicly than the payload, since they are part of the handshake required both to negotiate a connection and to avoid collisions. They are like the number on the front door of a house, there for everyone to see and to know whether it is the right door or not.

There is an argument to be made that doors should be hideable, but the implementation of WiFi thus far offers no real way to hide them. So I say they are more public than the data, which lives behind the door.

The identifiers are persistent and last for the lifetime of the devices. Their collection, cataloging and use is, in my view, more dangerous than the payload data that was collected.

That simply is not accurate.

The MAC and SSID can be easily changed. I just changed mine on my network. I just changed it again. Did you notice? No, of course not. Anyone can change their MAC to whatever they want, and the SSID is even easier to change. I do not recommend fiddling with the SSID if you have a good one, since it can be seen anyway and doesn’t offer much information to an attacker. My only recommendation is to think about what you are advertising with it. Changing the MAC is nice because you can hide the true identity of your device. Attackers would think it’s a Cisco, for example, when it really is a D-Link or SMC.
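To illustrate how trivially the MAC can be changed, here is a minimal Python sketch that generates a valid locally administered address; on Linux you would then apply it with `ip link set dev wlan0 address <mac>`. The function is my own illustration, not any vendor tool.

```python
import random

def random_mac():
    """Generate a random locally administered, unicast MAC address.

    Setting bit 0x02 of the first octet marks the address as locally
    administered (not a vendor-assigned OUI); clearing bit 0x01 keeps
    it unicast. Everything else is random.
    """
    octets = [random.randint(0, 255) for _ in range(6)]
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join("%02x" % o for o in octets)
```

Pick the first three octets from another vendor’s OUI instead of random values and the device will advertise itself as, say, a Cisco.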

I am no fan of Google’s response on this issue. A defense centered on “our engineers did not know what they were doing” or “we didn’t realize that Wifi scanning would collect all this data” is hard to believe. Who in the world writes surveillance software without factoring in the risk to privacy and security? That seems rather obvious, which may be why Congress is getting involved.

“As we have said before, this was a mistake. Google did nothing illegal and we look forward to answering questions from these congressional leaders,” a Google spokesperson said in an email.

A mistake? The first thing anyone should do when turning on WiFi capture tools is verify they will not be in violation of ethics and law. Their defense reads as though they still do not think this surveillance violated any law. This could have a major impact on the security industry and raises the question of what constitutes ethical data capture.

Google really needs to explain themselves in terms of the basics. They are being asked “why did you just drive to all the homes in town and take photos through the windows?” If they answer “we had no idea we actually were taking pictures of anything” you are right to be suspicious. That’s what cameras do…

Over the years Google has told me their organization is flat and all the engineers are just expected to do the right thing. They throw their hands up at the idea of someone telling anyone else what is or is not allowed. They balk at the idea of anything organized or top down. They are open to passive education and seminars but nothing strict like what is needed for compliance — thou shalt not wiretap without authorization, thou shalt disable SSLv2. Compliance means agreeing with someone else on what Google can and cannot do; that is a hard pill to swallow for an engineering culture that prides itself on being better than everyone else.

Although it is easy to see why they are trying to emulate an academic model they grew out of (all schools tend to struggle with the issue of how to secure an open learning environment) they are clearly not a learning institution. Aside from modeling the “academic peer review” process into a search algorithm, they have completely left the innocence of a real campus behind. Their motives are not anywhere close to the same.

Collecting personal data for academic research in a not-for-profit organization could make sense somewhere to someone as a legal activity. Taking profits from collecting personal data to fund surveillance software that runs around the world to collect personal data…uh, hello?

A naive political structure for security management and a lax attitude toward consumer privacy are what will really come into focus now at Google. They need a security executive: someone to explain and translate right and wrong (compliance) from the outside world into Google-ese, so their masses of engineers can make informed decisions about the laws they run up against, instead of violating them first and only then trying to figure out whether they should care.

New regulations and lawsuits may help. I am especially curious to see if this will alter surveillance laws. Google of course may plan just to dismiss the results as more outsider/non-Google concepts. The big banks are known to have a capital offset fund to pay fines rather than comply. Large utilities are known to have a “designated felon” who gets to go to jail on their behalf rather than comply. It is definitely worth noting that executives at BP said the $87 million OSHA fine for safety violations last year did nothing for them. It did not register as enough pain to change their culture of risk and save employee lives — it certainly did not prevent the Gulf spill after the loss of a $500 million rig. The spill itself is said to so far have cost BP $930 million and cost the government $87 million. The damage to the economy and environment is yet to be determined. Will BP change?

Although Cameron is wrong on the technical points, he is right in principle. The Google Wifi surveillance issue has exposed what appears to be a systemic failure by Google management to instill privacy and security due diligence in their engineering practices.

Encrypted iPhone easily accessed using Ubuntu 10.04

There is a story flying around, thanks to Engadget, that claims “Bernd Marienfeldt and fellow security guru Jim Herbeck” have “discovered” a security issue in the iPhone.

Do they really have to use “guru” as their title? It immediately makes me doubt the sophistication of the story, but I digress…

They say the issue is that if you connect an iPhone, using the phone’s USB cable, to a computer running Ubuntu 10.04 the phone is mounted and accessible.

The phone can be locked, the phone can be encrypted, but the Linux system will still mount the phone and provide open access to its filesystem.

First, some would say this is clearly a security hole because Apple has “tested and confirmed” that it is one. They bank their argument on the word “confirmed”. Apple also has stated it has no fix and does not plan on having one. This latter statement is what I would like to call your attention to today.

Note that the ability to mount an iPhone in Linux and natively access its files has been a public project under development since 2007. The alpha code was released as iFuse in early 2009 and tested by many people. Towards the end of 2009 iFuse became libimobiledevice, and it was so successful that several major distributions have included it in their packages.

It is from this context that I find it a bit odd that “security gurus” have tried to claim “discovery” of this functionality and brand it a flaw. While the passcode has never prevented me from mounting the iPhone in Linux, the libimobiledevice project says this has not been their experience.

27.05.2010: Some security sites report that even passcode enabled devices get auto-mounted. We could not reproduce this yet. However it might point at some bug during boot in the iPhone OS. Accessing a passcode enabled device the first time does not work in our tests as one would expect. Devices taking more time booting might be affected though, on any OS.

Maybe there is an intermittent issue here, but I am able to reproduce it on all my Linux systems. In fact this is the only way it has worked for me over many months.

I already considered the iPhone insecure for this reason, as well as many others. That is why I believe the real issue here boils down to whether you consider the iPhone a secure device or not. Do you?

If you are in the camp that thinks the iPhone can be a secure device, then once again you are in for a surprise. It is not, and this is definitely not the first time this has been discussed openly. Anyone with a computer and a Linux CD has been able to access everything on your phone for over a year. Moreover, there has been a rash of attacks that target people who are actively using the iPhone. Thieves know that if they get the phone away from the owner when it is not locked they have easy access to the data; owners should know this too.

If you are in the camp that does not think the iPhone can be a secure device…you are right. In fact, you might even work for Apple and be one of the people who said “we have no fix and no plan to make a fix” to any number of the control points for data confidentiality.

In other words, the absence of a plan to make a fix says to me that Apple does not see this as a serious flaw, let alone a flaw at all. They perhaps just confirmed that Linux is able to read the filesystem properly, in the same way that a thief who grabs the phone can use apps and access data.

Here is what anyone who plugs an iPhone into a Linux computer can see:

1) Plug the iPhone into the computer using the Apple USB cable that comes with the phone. You will see it appear as a mounted filesystem on the desktop (accessed via AFC, the Apple File Conduit protocol).

2) Then you will be prompted with two dialog boxes, one for music and one for photos:

3) You can choose to browse the filesystem from those dialog boxes, instead of opening applications to manage music and photos. Or you can cancel the dialog and just open the filesystem to browse from the desktop. Either way, full access to unencrypted data without needing to know the PIN. Surprised?

I downloaded and installed Ubuntu 10.04 the day it was released and the iPhone has always appeared this way to me. It did not seem to me that my data was any more exposed than I had already thought.

Perhaps I am giving the Apple team too much leeway when I say there is no new issue here and no fix needed, but I also do not think anyone should have seriously considered the iPhone to be a secure or safe device. It is highly unsafe at any speed. News?

Even in a physical security review I immediately found it designed to be incredibly fragile and prone to disaster. At least once a week I see a tweet from someone about an iPhone failure. Not news.

Perhaps for the same reason Apple put the infamous and notoriously oversensitive water sensors in the iPhone that void your warranty when triggered, no one should operate one under the assumption that the device is designed to protect data without significant outside controls and enhancements.

Giant foam cases, screen covers, vacuum sealed bags…the list goes on and on for things to buy to protect the phone. None of it seems to be from Apple. Likewise we have known for years that proper encryption and authentication for the filesystem is something you will not be getting when you purchase an iPhone. I do not feel knowing this about Apple products can really be called enlightenment.

The syllable gu means shadows
The syllable ru, he who disperses them,
Because of the power to disperse darkness
the guru is thus named.

Advayataraka Upanishad 14-18, verse 5

Super Surveillance Technology

A problem with technology that collects ambient data is that it is basically spying on everything all the time. This creates two distinct issues.

1) The first very obvious problem is with privacy. I say this is obvious even though Google just claimed they made a “mistake” collecting all kinds of wireless data around the world with mobile sniffers.

Regulation through policies and procedures is usually proposed as the solution to this first problem. The fact that Google is being threatened with legal action by privacy officials in numerous countries is an example of how this control point can work. Technology also can help with authentication, authorization and live audit trails of who accessed what data and when.

2) The second problem is that too much data will overwhelm analysts, and analysts are expensive. Collecting too much data is not only bad for PR and legal conformance, it also makes a surveillance system impossible to keep up with or make useful. Who has the time or resources to keep up with massive amounts of information and find anomalies quickly? Automation technology is typically proposed as the solution to this problem, but it can also be expensive.

Cue the military. They can justify the cost of solving the second problem. The military operates in environments where they collect massive amounts of data unfamiliar to most analysts (training becomes more specialized, so costs are far higher) and time to respond is critical. It also helps them that the first problem quickly erodes when dealing with data in a hostile environment.

Take this example provided by BBC News:

One technology that BAE Systems trialled, known as a “hyperspectral camera”, is able to analyse colour – to distinguish a camouflaged vehicle from the vegetation it is concealed within.

Gary Bishop from BAE’s Advanced Technology Centre in Bristol told BBC News: “You see things with your eyes in three wavelengths, the hyperspectral camera gives you information in 10.”

The system measures each wavelength of light being reflected by an object – it can see the specific type of green that is produced by chlorophyll in plants, and distinguish that from the green of paint or dye.
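The underlying comparison can be sketched in a few lines: treat each pixel as a vector of reflectance values across the camera’s bands and measure the angle between signatures. The ten band values below are invented for illustration, not real BAE data; the point is only that chlorophyll’s sharp near-infrared rise separates a leaf from green paint even when both look identical in visible light.

```python
import math

def spectral_angle(a, b):
    """Angle (radians) between two reflectance vectors.

    Smaller angles mean more similar materials, regardless of
    overall brightness.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

# Made-up 10-band signatures: chlorophyll shows a sharp near-infrared
# rise (the "red edge") that green paint lacks.
chlorophyll = [0.05, 0.06, 0.10, 0.08, 0.06, 0.05, 0.40, 0.45, 0.48, 0.50]
green_paint = [0.05, 0.06, 0.10, 0.08, 0.06, 0.05, 0.06, 0.06, 0.07, 0.07]
leaf_pixel  = [0.04, 0.06, 0.11, 0.07, 0.06, 0.05, 0.42, 0.44, 0.47, 0.49]
```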

Everything in the article centers around systems that create data mining efficiencies.

The military needs to quickly detect unusual patterns within otherwise normal data. This, as mentioned above, is good not only for automation but it also has the secondary effect of helping to protect privacy in civilian surveillance systems.

Automation means humans can be removed from the role of sifting through private and protected information to find a suspicious data point. The surveillance system could be setup to keep everything under wraps and only expose information required to review and confirm. Access only to suspicious event data is far less controversial than access to all data. The more limited access also can be logged and audited later. That means in the end you get access to more data but less privacy risk…assuming you trust and verify the system is operating properly.
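A minimal sketch of that design, with every name invented for illustration: the full capture stays hidden, only events a detector flags are surfaced to the analyst, and each exposure is itself recorded so the analyst’s access can be audited later.

```python
def review_queue(events, is_suspicious, audit_log):
    """Yield only flagged events from a (private) capture.

    events        -- the full collection, which the analyst never sees
    is_suspicious -- an automated detector (a predicate here)
    audit_log     -- a record of every item actually exposed, so even
                     access to the suspicious subset can be audited
    """
    for event in events:
        if is_suspicious(event):
            audit_log.append({"exposed": event})
            yield event
```

The analyst sees two events out of millions, and the audit log shows exactly which two.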

This still raises the question of whether it was ethical to collect the data in the first place, as in the case with Google. What were they thinking?

“If the company is fighting this so hard, it suggests there is more to this than meets the eye,” said Mr. Davies, of Privacy International. “The real question is: What was Google collecting from unwitting individuals and why? So far, nobody really knows.”

Perhaps at this point they should try to mount a “we were trying to find terrorists” defense…it certainly sounds better than a “programming error” that ran around the world for an extended time gathering a massive amount of data.

For a company with a very public emphasis on hiring the best and brightest, I have to wonder whether they really expect anyone to believe the surveillance was not intentional. Most security professionals balk at the idea of capturing packets from random airspace; it is known to be unethical if not illegal in most contexts. Why Google did not properly account for the risks of surveillance is hard to understand.
