How the ANC used encryption to help defeat apartheid

The following paragraph is from an opinion piece last year by CNN National Security Commentator Mike Rogers, called “Encryption a growing threat to security”:

Back in the 1970s and ’80s, Americans asked private companies to divest from business dealings with the apartheid government of South Africa. In more recent years, federal and state law enforcement officials have asked — and required — Internet service providers to crack down on the production and distribution of child pornography. And banks and financial institutions are compelled to prevent money laundering by organized crime and terrorists finance networks.

All of this is against companies’ bottom-line business interests, but it has been in the public interest. These actions were taken to protect the public and for the greater good. And all of it was done to mitigate a moral or physical hazard.

I don’t know about you, but that “apartheid” line jumped right out at me. African history doesn’t come up enough on its own, let alone in the crypto debates. So my attention was grabbed.

Let me just say I agree in principle with a “greater good” plea. That’s easy to swallow at face value. However, in invoking the fight against the wrongs of a South African government while calling encryption a threat to security, Rogers makes a huge error.

My first reaction was tweeting in frustration how Biko might have survived (he was taken captive by police and beaten to death in prison) if he had had better privacy. History could have turned out completely differently, far better I would argue, had activist privacy in South Africa not been treated as a threat to national security. Encryption could have preserved the greater good. I’ll admit that is some speculation on my part, which deserves proper research.

More to the point against Rogers, South Africa severely underestimated encryption use by anti-apartheid activists. That’s the fundamental story here that kills the CNN opinion piece. Use of encryption for good, to defeat apartheid, is not a secret (see “Revolutionary Secrets: Technology’s Role in the South African Anti-Apartheid Movement,” Social Science Computer Review, 2007) yet obviously it needs to be told more widely in America:

…development of the encrypted communication system was key to Operation Vula’s success

Basically (no pun intended) hobbyists had taught themselves computer programming and encryption using a British computer called the Oric 1 and some books.


An Oric 1 cost only £100 and was quite popular in the 1980s. You could say it had a following comparable to the Raspberry Pi today, which makes this an extremely relevant story. With only a little investment, study, and careful planning, the ordinary people of “Operation Vula” used encryption to fight the apartheid regime.
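Jenkin’s published accounts describe the Vula system as reportedly built on one-time pads: random key material distributed on disks, each pad used for a single message and then destroyed. As a rough, hypothetical illustration (my own sketch in Python, not his actual BASIC code), the core of a one-time pad is just an XOR of the message against random key material:

```python
import secrets

def generate_pad(length: int) -> bytes:
    # A one-time pad must be truly random and at least as long as the message
    return secrets.token_bytes(length)

def xor_bytes(data: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the message")
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"REPORT: SAFE HOUSE COMPROMISED"
pad = generate_pad(len(message))
ciphertext = xor_bytes(message, pad)
assert xor_bytes(ciphertext, pad) == message  # round trip recovers the message
```

The security rests entirely on discipline: the pad must be truly random, never reused, and destroyed after use. The computer merely automates, at scale, what activists once did laboriously by hand.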

When the operation was finally uncovered by the police in 1990, they knew too little, too late, to disrupt Vula. Nonetheless, to the very end the government accused people caught using encrypted communication of terrorism; buildings using encryption were called “havens for terror”.


So my second reaction was to tweet “please watch ‘Vula Connection’ how a South African man used encryption to turn against his gov and end apartheid” to try and generate more awareness. It had 247 total views on that day; now, nine months later, it still has only 7,766. Not bad, yet not exactly a huge number.

I also tweeted “The Story of the Secret Underground Encryption Network of Operation Vula, 1995” for those who would rather read Tim Jenkin’s first-person account of crypto taking down apartheid.

His prison break (please read Escape from Pretoria; a video also is available) and secure communication skills are essential study for anyone who wants to argue whether encryption is a “threat to security” in the context of apartheid and the 1980s.

Here is Tim Jenkin explaining what he did and why. Note there are only 185 views…

My third reaction was to contact the organizers of the RSA Conference, since it has a captive crowd in the tens of thousands. I know my tweets have limited reach (hat-tip to @thegrugq for immediate sub-tweets when I raise this topic, extending it to far wider audiences). A big conference seemed like another way for this story to go more mainstream.

So I suggested to conference organizers that we create a “humanitarian” award, set up a nomination system and group, and then I would submit Tim Jenkin. While Tim might not get the formal nod from the group, we at least would be on the right road to bringing this type of important historic detail forward into the light.

All that…because an op-ed incorrectly tried to invoke apartheid history as some kind of argument against encryption. Nothing bothers a historian more than seeing horrible analysis rooted in a lack of attention to available evidence.

So here we are today. RSA Conference just ended. Instead of Tim Jenkin on stage we were offered television and movie staff. CSI, Mr Robot, blah. I get that organizers want to appeal to the wider audience and bring in popular or familiar faces. I get that a lot of people care about fictionalized and dramatized versions of security, or whatever is most current in their media and news feeds.

Not me.

It was painful to sit through the American-centric, entertainment-centric vapidity on stage, knowing I had failed to convince the organizers to bring lesser-known yet important and real history to light. Even if Tim Cook had spoken, it still would have paled for me in comparison to hearing from Tim Jenkin. The big tech companies already have a huge platform, with every journalist hanging on every word and press release. Big tech and entertainers dominate the news, so where does one really go for an important history lesson ready to be told?

What giant conference is willing to support telling of true war stories from seasoned experts in encryption, learning new insights from live interviews on stage, if not RSAC?

And beyond learning from history, Tim Jenkin also has become known for recent work on a free open source system helping people trade without money. Past or future, his life’s work seems rather important and worth putting out to a wider audience, no?

It would have been so good to have recognized Tim’s work, and moreover to have our debates more accurately informed by real-world anti-apartheid encryption. If only RSAC had the courage to bring the deeper message to the main stage, outside of the cryptographers’ panel. I will try harder to encourage this direction next year.


Why Were 150 Somali Militants Killed in a US-led Air Strike?

The US used aircraft to drop explosives in Somalia, killing a large number of people. This has abruptly reminded many people of the existence of ongoing US military operations there, under the aegis of Africa Command (AFRICOM), and I see confusion in my social networks. Perhaps I can help explain what is going on.

Allow me to back up a few years to give some context.

The Shift from Covert to Overt Operations

US military European Command (EUCOM) leaders realized ten years ago that they needed a more focused and local approach if they were expected to run “stability operations” in Africa. Do you remember in 2006 when the ICU (Islamic Courts Union) defeated CIA-backed warlords in Mogadishu? In response, US special forces backed a 2007 Ethiopian invasion of Somalia to retake control and remove the ICU, as I wrote in posts here called “Ethiopia rolls 1950s tanks into Somalia” and “Ethiopian invasion of Somalia”.

This public EUCOM “stabilization” effort, using Ethiopia as a proxy military power after the CIA lost covert control, effectively created a huge sucking sound; a vacuum of leadership and instability (a free market) was left behind after a neighboring state intervened. The ICU essentially transitioned into Al Shabaab at that time. Although that transition might seem obscure, most Americans actually have heard of the piracy issues it generated. Unregulated seas and the collapse of safe markets led to pirates, which became a major news story and a headache in global shipping, as you can see in a simple SIPRI graph illustrating President Bush’s 2007 foreign policy results:


The question before US politicians back then, almost exactly like the one Britain faced in 1940, was how to build overt military operations in the Horn of Africa that push for state control while being considered light-touch state-building (aid), or at least state-support (self-defense), operations (mostly ignoring the global piracy issues and wider regional market instability they would create).

Foreign Military Support for African States

EUCOM knew even before 2006 that the US needed a more focused regional approach in Africa to achieve its assigned policy aims. Africa obviously isn’t part of (post-colonial) Europe so change to a more focused regional resource was overdue. Thus, to formalize and better focus emerging intervention and military support policies for Africa, AFRICOM was created in 2008 under President Bush:

This new command will strengthen our security cooperation with Africa and create new opportunities to bolster the capabilities of our partners in Africa. Africa Command will enhance our efforts to bring peace and security to the people of Africa and promote our common goals of development, health, education, democracy, and economic growth in Africa.

This presidential declaration eight years ago of bringing peace to Africa might seem a long stretch from the very recent news of US warplanes bombing Somalia. Bear with me for another minute.

The mission of AFRICOM originally was described as cooperation and augmentation of African governments against destabilization; a mission of dealing with “failing states” rather than taking on war-fighting or “conquering state” objectives. This is of course a bit ironic, given how it rose from the ashes of Somalia invaded by Ethiopia. To be fair though AFRICOM being established under Bush offered the chance for a different future and more locally relevant options than under EUCOM. Although I’ve studied military operations on the Horn of Africa all the way back to the 1930s this major policy shift in 2008 is a good place to start looking at American reasons for being in Somalia today.

Policy Shift and the Acceptance of Foreign Military Support

Creation of AFRICOM was not without controversy at the time, as explained by FOX news.

Most Africans don’t trust their own militaries, which in places like Congo have turned weapons on their own people.

So “they don’t trust Africom, either, because it’s a military force,” Okumu [Kenyan analyst at South Africa’s Institute for Security Studies] said. There is also “a suspicion America wants to use us, perhaps make us proxies” in the war on terror.

AFRICOM initially was to take control and run an existing base in Africa, as well as support the increasingly wider regional military objectives. Aside from pushing a 2007 Ethiopian invasion of Somalia to bring down the ICU, US policy was following at least two prior initiatives: One, in 2002 a US military Combined Joint Task Force base was established in the Horn of Africa (CJTF-HOA), staffed with thousands of military personnel. Two, by 2005 a “Flintlock exercise” among many African security forces was being led by the US across the Sahel region (from Djibouti to Senegal).

Thus it makes sense why some were worried that US operational bases with proxy combat missions could be a result. We may never know how AFRICOM was intended to roll out because, after Bush’s grand political hand waving about humanitarian missions and economic stabilization, Obama came into ownership in 2009 with different thoughts on foreign policy.

It seems to me the worries about intentions were well-founded. Obviously Bush already had been caught lying, or at least willfully ignoring truth, in order to invade Iraq. That alone should give everyone pause. His use of Ethiopia in 2007 appeared to similarly be a thinly-veiled destruction of Somali sovereignty to maintain CIA access for renditions and executions on foreign soil without declaring war. Bush foreign policy was so US-centric it raised concerns about, for lack of a better phrase, dumb imperialist thinking.

So in 2009 a new president came into ownership without the same legacy and policy baggage. Obama soon gave speeches that started a slightly different spin on US partnerships with African states:

When there’s a genocide in Darfur or terrorists in Somalia, these are not simply African problems — they are global security challenges, and they demand a global response.

A good indicator of where AFRICOM headed under the new US leadership was seen in operation Celestial Balance, as I wrote in 2010. Tactics changed under an Obama doctrine through more intelligent and less heavy-handed methods of “direct action”, a euphemism for unmarked black helicopters appearing suddenly and killing people identified as threats to America…ahem, I mean global security.

Obama would say privately that the first task of an American president in the post-Bush international arena was “Don’t do stupid shit.”

The difference was significant.

The former president treated evidence as an inconvenience; the latter wanted carefully weighed and measured outcomes. Less fanfare, less flowery: clear and surgical operations, based on strong evidence, led to highly targeted missions, albeit without much outside review or transparency.

Then, rather than condemn the new US foreign policy doctrine of AFRICOM and US “actions” in Africa, Somalia’s new government warmed to the program and called for even more collaboration against threats.

 In a series of interviews in Mogadishu, several of the country’s recognized leaders, including President Sharif, called on the US government to quickly and dramatically increase its assistance to the Somali military in the form of training, equipment and weapons. Moreover, they argue that without viable civilian institutions, Somalia will remain ripe for terrorist groups that can further destabilize not only Somalia but the region. “I believe that the US should help the Somalis to establish a government that protects civilians and its people,” Sharif said.

It appears, from my reading of the Somali perspective over time, that we cannot easily write off AFRICOM as the proxy war engine it could have become. There have been no new American bases built. Instead we have seen state-building, or at least assistance in state self-defense, pointing in the direction of augmentation and support. We can criticize the transparency, but so far we don’t have much ground to call Obama’s “direct action” policy a purely self-serving war using African states as proxies.

The Bush administration was right to heed EUCOM establishing new focus, creating AFRICOM; it appears only to have been wrong in how it thought about supporting intelligence operations and its disregard for economic impact. Hard to say whether Obama has been right, but it is likely not worse than before (no longer threatening sovereignty, no longer undermining regional economic viability).

Somalia, let alone the African Union Mission in Somalia (AMISOM), has continued talking about being a partner on global security efforts. This is unlike the 2007 Ethiopian invasion with US objectives front and center, aligning awkwardly with other nations or prodding them into going along also for self-interest.

The US currently is feted as a partner in regional Horn stabilizing missions rather than owner or operator. Local stability and growth policy using global partnerships isn’t an entirely awful thing, especially when we see China talking about and doing much of the same in its foreign relations for this region and throughout Africa.

Why An Air Strike?

Ok, so enough background. Back to the present: what’s with bombing hundreds of people?

According to a tweet by the BBC Africa Security Correspondent, Tomi Oladipo:

both Al Shabab & residents confirmed militants hit. Dispute is over death toll.

Everyone on the ground seems to agree casualties were militants and not any civilians. I have not seen anything contradicting this: militants massed in a training camp were preparing to graduate and execute a mission to undermine regional stability. The only major caveat to the reporting and news is Al Shabaab has been known to infiltrate news organizations to murder journalists it disagrees with; local reporting can be hard to gather.

I asked Paul Williams, Associate Professor at GWU and author of “War & Conflict in Africa“, if this strike could be seen as a prevention measure, given recent Al Shabaab attacks. He quickly confirmed that as true:

#AMISOM reconfiguring after Leego, Janaale & ElAdde to avoid a repeat.

If you’re familiar with those three references to Al Shabaab attacking security camps you easily can see why this strike to their camp fits regional conflict patterns, with the US serving to help local government forces maintain control and protect civilians.

With this in mind I would like to address four questions raised by Glenn Greenwald about the attack:


Were these really all al Shabaab fighters and terrorists who were killed? Were they really about to carry out some sort of imminent, dangerous attack on U.S. personnel?

Yes, we see credible accounts of imminent danger, in the pattern of recent attacks, from an Al Shabaab militant camp. It almost could be argued that this attack was in response to those earlier militant attacks; a better self-defense plan was called for by local authorities (Somalia and Kenya) after those disasters. US personnel were in danger of attack by nature of working with the authorities targeted by Al Shabaab. We also don’t have details on the attack planned but it very well could have been similar to Westgate or Garissa University.


There are numerous compelling reasons demanding skepticism of U.S. government claims about who it kills in airstrikes.

Yes, big fan of skepticism here. At the same time, by all accounts and recent events, this appears to be a clear case of a military camp being destroyed to prevent terrorist attack later. BBC made the casualty type clear. Recent Al Shabaab operations, attacking Kenyans while in camp, should further erode skepticism around motive and opportunity of attacking militants while in their camps. I have not yet seen evidence civilians were in these camps. South Sudan, just for comparison, has been a completely different story.


We need U.S. troops in Africa to launch drone strikes at groups that are trying to attack U.S. troops in Africa. It’s the ultimate self-perpetuating circle of imperialism

This is lazy and shallow reasoning. If US troops left Somalia there would still be attacks by Al Shabaab on the authorities there. Whether you agree or not with supporting the local regime, it is not fair to say the only purpose of US troops is to act like a target for the premise of self-defense to attack US enemies. We have credible evidence that it goes beyond a proxy conflict, and the US is in fact assisting local authorities who are under attack. We can debate the integrity of a US-backed authority and their role in calling for assistance, yet it is clear Al Shabaab is a threat to far more than just Americans.


Within literally hours, virtually everyone was ready to forget about the whole thing and move on, content in the knowledge — even without a shred of evidence or information about the people killed — that their government and president did the right thing.

Surely I will be called an exception here, since as I mentioned I’ve studied conflict on the Horn for over two decades and have undergraduate and graduate degrees focused on it; yet I do not find lack of interest to be true for the general population. There never has been more interest in this region than today.

This blog post was written because people were talking in general conversations about these killings. The fact that the story initially was brought up as a drone attack meant it drew a lot of attention. Conversations went on for hours just about the technical feasibility of drones to carry out such a large attack.

Granted we should be paying more attention. That seems like a great general principle. I am seeing more people pay more attention than ever before to issues and a part of the world that used to be obscure. Within literally hours everyone was asking questions about what happened, who really was killed, and why. It is actually quite a shock to see Somalia so much in the news and Americans digging immediately into the details, asking what just happened in Africa.

Updated to add a “mapping militants” project chart of Somalia, which better illustrates why power fractures and allegiances are complicated.


RSAC 2016: Thoughts and Memories

Three things stood out to me at RSAC this year:

  1. Diversity
  2. Business and Innovation
  3. Collaboration


Diversity

Usually I have some general unease or complaint in this category. Not this year. While I did tweet that there was an annoying lack of diversity in keynote speakers, overall the conference felt more diverse than ever before.

Walking the expo and the conference talks felt like being in a major international city. Waves of experienced and new, young and old, male and female passed by, with many cultures’ clothing types and styles easily found. It felt like the security community was being represented across an extremely wide spectrum, wider than I had ever seen before. I talked briefly with a woman wearing a niqab attending sessions (might have to do this myself next year). And while it was easy to hear the big delegations of Israelis, Chinese, Russians, and Germans wandering around, I also was happy to run into a Palestinian cryptographer who wanted to talk Cloud.

Business and Innovation

Every year I do an extensive tour of the Expo, with interviews, to find useful products. Some tend to argue that “security 1%-ers” are the only people who really would benefit from the expo and that everything is positioned as a silver bullet. That’s obviously untrue.

Adi Shamir walked with me to a booth, for example, so I could show him what I thought to be an interesting development in hardware authentication. The conversation went something like this:

  • Me: it’s interesting to see a stereo jack token form-factor. resilient, easy…
  • Adi: one form, another form, who cares. use the USB port instead. they’re all just form factors. energy harvesting? AHA! now THAT is interesting
  • Me: form factor is a problem space that needs better solutions. energy harvest wouldn’t get users excited but the security issues are something to review
  • Adi: yes, the things we can do with energy
  • Me: given low capacity we can blast with energy to cause to fail, break, overheat
  • Adi: this is not that interesting, but there are other things…

He and I were approaching things from completely different objectives. I was thinking about how to solve for user requirements: can we get these into hands immediately to improve multi-factor usage rates? He was thinking about how to solve for engineering requirements: can we break this thing?

Tools we were looking at and discussing with the vendors were not for the 1%. They were not silver bullets. They were meant for mainstream use and very focused in their application. Many such tools could be found. The problem really is not that this kind of every-person stuff does not exist. The problem is marketing is actually extremely hard in security. If you think the buzzwords, costumes and flashing plastic garbage are annoying, you’re probably right. It just verifies how hard it is to do marketing well, to reach a wide audience with a tight message.

And that’s one of the coolest things about RSAC. So many different approaches and ideas are launched just to see if they work; we might actually find something good. It is an opportunity to find or develop mainstream tools from a diverse field of ideas. This is where people are talking about all kinds of solutions and partnerships.

On the other hand, it’s also important to look carefully for 1%-er solutions.

About five years ago at RSAC I spoke with a flash memory vendor promoting their new devices, and quickly I figured out we were going to have problems with data destruction. It was a 1%-er issue then, an early look into what was coming. In the following years I saw papers being published, almost exactly like the conversation at RSAC, about ease of extracting data from flash. And now this year I found this 1%-er issue has gone mainstream: vendors push specialized products (an extreme opposite of silver bullet) towards commodity prices to close a gap. If you have flash devices and need to destroy data, there were some small engineering-oriented vendors you should have been talking with.

Intelligence and knowledge systems are the 1%-er space of today, which actually parallels a trend in general IT. Stock up on “threat” feeds, run analysis on them with visualization, and maybe even apply learning algorithms or think about how to leverage artificial intelligence. While I could beat up our industry for going all 1%-er on this area, the wider trend in overall IT puts it in perspective, and we’d be fools if our industry didn’t jump in now. The people adopting today, or at the very least discussing, are at RSAC setting the stage for what will become 99% tools five years ahead.

A customer asked me a few weeks ago to build a specific threat feed solution. So at RSAC I set about the expo floor asking every single vendor I could to give me their proposed solution. It was actually comical and fun because it challenges the marketing folks to deliver on the spot.

Symantec came across as an utter disaster. They literally could not find anyone, over two days, to speak about their products. Sophos was all ears as I ended up telling them how good their data could be if they packaged it again for the right consumers. They apparently weren’t aware of the demand types and seemed curious. Kaspersky kept shaking my hand, saying the right people need to be found, and telling me we can do business together while not actually answering technical questions. Fireeye sent me to their head of a new group focused on the exact problem. Very impressed with the response and quick, competent handlers. Clownstrike said they have what we need and then just walked away. LOL. Recorded Future gave me a long and detailed hands-on demonstration that was very helpful…all of which ends up in a report that goes to a customer.

To put it bluntly, this year felt like the rise of private intelligence and I expect to see this field of “knowledge” tools for analysts grow significantly over the next 2-3 years.

The inverse of this type of prediction exercise is noticing the buzzwords most likely to have disappeared: GRC, DLP, APT. Apparently vendors are realizing that the great analyst hype for some of these “tool” markets did not pan out. Do we blame the analysts who predicted these markets would boom, and created the product race, or blame the vendors who jumped in to run it?

Regulations and compliance seemed to be showing up everywhere, being discussed all the time, without being pushed obnoxiously as some kind of new thing to buy. HIPAA! PCI! No, we didn’t see that at all. There was no yelling about regulators, and at the same time it was mentioned in talks and product marketing. Compliance was pleasantly subtle, perhaps indicating an industry maturity level achieved.

Last but not least I was sad to see a lack of drone research. Despite having talk tracks on the subject, and a huge boom in drone-related security concerns, we really didn’t find much evidence of a market for security in this space yet. An investor literally told me he’d find us a billion dollars to solve some very specific drone security issues, yet walking the expo there were no offerings and no evidence of products or strong technical skills in this area.


Collaboration

With new levels of diversity and innovation, it probably goes without saying that there was an air of collaboration. While there are plenty of private parties and VIP events (literally thousands of side-conferences) for business to be done by old friends behind closed doors, what fascinated me was the interactions out in the open. Bumping into strangers all day and night is where things get interesting, especially as you hear “let me introduce you to…” all around.

A big concern is that there are solutions lurking around and missing their target audience. I’m speaking with some ex-Cisco guys one day who have developed a healthcare IoT fingerprinting tool. Don’t ask me why they chose healthcare, yet that’s their very narrow approach right now. The next day I’m watching my twitter feed light up about the lack of security tools designed for healthcare IoT. How do I get these two groups collaborating? RSAC is a place where I can try to make it happen.

The keynotes emphasized collaboration in a fairly formal way. Government should talk with private sector, yada yada, as we always hear. More practical is the fact that you could walk into a booth and overhear the Norwegian military discussing some use case specific to their plans for invading Finland, and then jump in and start a broader discussion about different tools and procedures for protecting doctor privacy in Africa.

Walking up and talking to strangers led to some excellent follow-on meetings and conversations around how we could work together. I dragged three friends with me into a session on hacking oil and gas, which turned out to be great fodder for conversation with a guy from NIST and an invitation to present on supply chain security to the US government.

Cloudera had a booth where I spent the better part of an hour discussing how different Big Data platforms can work together better to create a common standard for security assessors, as different staff came and went and suggested ideas. It felt like we were compressing three weeks of scheduled meetings into one impromptu intense planning session.

There are so many collaboration channels that at some point it can be overwhelming; you simply cannot pursue all the opportunities to be found at RSAC. If you want to meet some of the best minds in the world trying to solve some of the hardest security problems, or you want to expose your ideas to a wide set of minds and collaborate in a short time, this conference can’t be beat. It is massive, not a quiet walk in the park with known friends, and that’s not such a bad thing as our industry has to learn how to welcome in more and more people.

Our Digital Right to Die

With so many, so many, blog posts about Apple and the FBI, I have yet to see one get to the core issue.

Do we have a digital right to die? After we are dead, in other words, who controls the destiny of our data and what authority do we have over them?

Having been in the security industry for more than two decades I have worked extensively on this problem, not only because of digital forensics. Over the past five years we’ve developed some of the best technical solutions yet to help kill your data, forever, at massive scale.

The market has not seemed ready. Knowledge in this area has been for specialists.

Although I could bring up many cases and examples, most people do not run into them because discussion usually revolves around how to preserve things. Digital death is seen as an edge or outlying situation (regulatory/legal compliance, a dead soldier’s email, a hiker’s cell phone, a famous literary artist’s archives).

It feels like this is about to change, finally.

Everyone now seems to be talking about whether the FBI should be allowed to compel a manufacturer to disable a cell phone’s dead-man switch, for lack of a better term. A dead-man switch (or dead man’s switch, or kill switch) operates automatically if the person who set it becomes incapacitated.

Dead-man switches can have sophisticated logic. Some are very simple. In the current news the cell phone uses a simple count. After several failed attempts to guess a PIN for a phone, the key needed to access data on that phone is erased.
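That counting logic is simple enough to sketch in a few lines of Python. This is a toy model, not Apple’s implementation; the `PhoneVault` class and all its names are hypothetical:

```python
class PhoneVault:
    """Toy model of a PIN-protected key store with a dead-man switch."""

    MAX_ATTEMPTS = 10  # after this many failed guesses, the key is erased

    def __init__(self, pin, data_key):
        self._pin = pin
        self._key = data_key  # key that encrypts the phone's data
        self._failed = 0

    def unlock(self, guess):
        if self._key is None:
            raise RuntimeError("key erased: data is unrecoverable")
        if guess == self._pin:
            self._failed = 0   # correct guess resets the counter
            return self._key
        self._failed += 1
        if self._failed >= self.MAX_ATTEMPTS:
            self._key = None   # the dead-man switch fires
        return None
```

Once the key is erased, even the correct PIN recovers nothing; the data is cryptographically dead.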

Philosophically this situation presents a very difficult ethical question: Under what circumstances should law enforcement be able to disarm a dead-man switch to save data from deletion?

In this particular case we have a simple, known trigger in the dead-man switch. Bypassing it in principle is easy: you turn off the counter. Without a count the owner can try forever until they guess the PIN.
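The arithmetic shows why removing the counter defeats the protection entirely: a 4-digit PIN has only 10^4 = 10,000 possibilities, so an unthrottled attacker can simply enumerate them all. A minimal sketch (the `brute_force_pin` name and the toy `check` callback are hypothetical, not any vendor’s interface):

```python
from itertools import product

def brute_force_pin(check, length=4):
    """Try every possible PIN of the given length.

    With no failed-attempt counter, nothing stops an attacker from
    submitting all 10**length candidates; a 4-digit PIN falls in at
    most 10,000 tries.
    """
    for digits in product("0123456789", repeat=length):
        candidate = "".join(digits)
        if check(candidate):
            return candidate
    return None

# Toy stand-in for a phone whose dead-man counter has been disabled:
secret = "4831"
found = brute_force_pin(lambda guess: guess == secret)  # found == "4831"
```

This is why the counter, not the PIN itself, carries most of the security: the erase-on-failure control is what makes a short PIN viable at all.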

Complicating the case is that the vendor in question sells proprietary devices. They, by design, want to be the only shop with capability to modify their devices. They do not allow anyone to modify a device without their approval.

If there is any burden or effort here, arguably it comes from a business model that locks away the knowledge needed to make a simple configuration change (stop the counter) to a complex device. Some see the change as a massive engineering effort, others say it is a trivial bit flip on existing code, yet no one is actually testing these theories because by design no one but the manufacturer is allowed to.

Further complicating the case is that the person using the device is dead, and technically the device is owned by someone else. Are we right to honor the intentions, unknown, of a dead person who set the dead-man switch over the living owner of the device who wants the switch disabled?

Let me put it this way. Your daughter dies suddenly. You forget the PIN to unlock the phone you gave her to communicate with you. You ask the vendor to please help disable the control that will kill your daughter’s data. Is it your data, because it is your device, or is it your daughter’s data?

If the vendor refuses to assist and you go to court, proving that you own the phone and the data is yours, do you have a case to compel the vendor to disable the control so that your data will not die?

What if the vendor says a change to the phone is a burden too great? What if they claim it would take an entirely new version of the iPhone operating system for them to make one trusted yet simple change to disable the dead-man counter? How would you respond to self-serving arguments that your need undermines their model?

It is not an easy problem to solve. This is not about two simple sides to choose from. Really it is about building better solutions for our digital right to die, which can be hard to do right, if you believe such a thing exists at all.

Updated to add reference to “kill switch” regulation:

Apple introduced Activation Lock in iOS 7. The feature “locks” iOS devices with the owner’s iCloud account credentials, and requires that they be authenticated with Apple before the device can be erased and set up again.

Activation Lock was the first commercially available “kill switch” for mobile operating systems, and similar features have since been implemented by Google and Samsung. California passed a law last August requiring that all smartphones sold in the state implement kill switches by July 2015, and an FCC panel in December recommended that the commission establish a similar nationwide framework, citing Activation Lock as a model deterrent.

Polish Mathematicians Broke Nazi Enigma

Sadly this topic has remained a simmering controversy for far too long, mostly because of a lack of effort on all our parts. It isn’t hard to get it right, yet for some reason Poland isn’t getting the credit due. The BBC in 2014 described a hugely important and historic event as simply a “quiet gathering”.

The debt owed by British wartime codebreakers to their Polish colleagues was acknowledged this week at a quiet gathering of spy chiefs. […] On the outskirts of Warsaw, some of the most senior spy bosses from Poland, France and Britain gathered this week in a nondescript but well-guarded building used by the Polish secret services. Their coming together was a way of marking the anniversary of a moment three-quarters of a century earlier when their predecessors held a meeting in Warsaw that played a crucial role in the victory over Hitler in World War Two.

I feel guilty. What have I done, as a historian of sorts, to help elevate this from quiet obscure ceremony to normalcy?

Mostly, for at least five years, I have bored friends with stories and tweeted about Poland’s contributions, which doesn’t feel like enough. So here’s my blog post to move the ball forward.

This is inspired by a new story in The Telegraph that the Polish government says more needs to be done.

Polish codebreakers ‘cracked Enigma before Alan Turing’
Diplomats say Poland’s key part in the deciphering the German system of codes in WWII has largely been overlooked

Time to stop overlooking. Let’s do this. Say it loud and proud, Poland broke the Nazi Enigma.

The Telegraph in 2012 versus 2016

News from The Telegraph in 2012 was: “Honour for overlooked Poles who were first to crack Enigma code”

…decades after Nazi Germany’s Enigma code was cracked, Poland has gone on the offensive to reclaim the glory of a cryptological success it feels has been unjustly claimed by Britain.

Frustrated at watching the achievements of the British wartime code-breakers at Bletchley Park lauded while those of Poles go overlooked, Poland’s parliament has launched a campaign to “restore justice” to the Polish men and women who first broke the Enigma codes.


The 2001 film Enigma, in particular, ruffled Polish feathers. The British production starring Kate Winslet and set in Bletchley Park made little mention of the Polish contribution to cracking the codes, and rubbed salt into the wounds by depicting the only Pole in the film as a traitor.

Some really good background in this 2012 article in The Telegraph. It is well written and accurate. Curious then how different it is from the story told to us in 2016.

Instead of pulling forward the earlier work, The Telegraph wrote a whole new version in 2016 filled with poorly researched ideas, pointing more towards the recent Turing movie, “The Imitation Game”.

Here are some questionable statements that jumped out at me.

Telegraph 2016: Poland Passed the Baton

…few people realise that early Enigma codes had already been broken by the Poles who then passed on the knowledge to Britain shortly before the outbreak of war.

It was not so simple. The Poles did not just pass along knowledge “shortly before” war. More to the point, given the escalation path of 1938, why did Britain wait until the last moment, just before the fall of Poland and the declaration of war on Germany, to receive crucial intelligence on the German Enigma? Why were the British far more focused on the Soviets as a threat instead of Germany, and why so interested in the Spanish and Italian Enigmas instead of the German one?

Perhaps another way of asking this is what did the 1938 Munich Agreement, British appeasement of Nazi Germany, tell the Poles about trust in potential allies and giving away secrets?

Codebreakers from Britain early in 1939 had a kind of stalemate with Poland via talks set up by France. The three sides weren’t exactly aligned. Simply put, it was British arrogance that led them to believe their own ability to break Enigma was the best. When the British first met with the Poles, they left thinking there was nothing they could gain.

Once war with Germany seemed unavoidable by summer of 1939, Poland simply ran out of time waiting for better terms of collaboration or warmer relations with British intelligence. Just before Germany rolled over Poland, codebreaking basically shifted to France, where negotiations continued with real alignment on German Enigma as the most pressing concern.

Months were basically wasted before the British were caught out as laggards and realized they had mistaken French and Polish caution about Germany for incompetence. Fortunately England recognized its error before it was too late and rushed to learn from Poland, as war with Germany was declared.

Telegraph 2016: Poles Needed Help

By the time war broke out the Germans had increased the sophistication of the machine and the Poles were struggling to make more headway.

I hate the way this sounds. Hope it goes without saying the Poles were struggling because of betrayal by the Soviets and invasion by the Nazis, while the world stood by and didn’t help. A highly secretive code-breaking team wasn’t going to just carry on effortlessly while their entire country was carved up and dismantled.

Sure the Germans had made a change, but that wasn’t the first time they altered Enigma (see Rejewski’s leading work on the Enigma Eintrittswalze – “entry wheel” – before the British figured it out, or the transfer of Zygalski sheets to Bletchley, where they were known as Netz, short for Netzverfahren – “lattice method”). The difference by the time war broke out? The Polish had to destroy all their secret decoding systems and escape to France. I’ve read that at first they tried to go to Britain and were denied due to confusion and secrecy (the British embassy could not verify their roles). I’ve also read they went straight to France, where politics prevented them from moving to Bletchley. The bottom line is that from the end of 1939 through early 1940 Turing and other Brits visited and studied Polish methods, learning of plans for new machines and preparing to build up operations in Bletchley Park.

“Struggling to make headway” is not a fair characterization relative to the many earlier mathematical struggles, which Poles obviously overcame on their own. The Poles had reconstructed Enigma and solved for daily keys. What made it hard to continue making headway? Staying under difficult conditions in Vichy France.

One of the original three who cracked Enigma, Rozycki, was killed in 1942 (lost at sea). The remaining Poles tried to escape to Spain that year. Langer, Ciezki and Palluth were captured by the Germans. Rejewski and Zygalski escaped and landed in a Spanish prison. Only in 1943 did these two finally reach England, where they were pushed aside into the Polish army in exile.

Struggling to make headway shouldn’t be blithely blamed on sophistication of the Enigma. Poles already had made plans to step up their game, which were handed over to England, as they tried to fight in Vichy France and stay alive.

Telegraph 2016: Blame Hollywood

…despite their help, history and Hollywood has largely ignored their role. The most recent film The Imitation Game, starring Benedict Cumberbatch, barely mentioned the Poles.

That’s right. And it’s a damn shame. Given that The Telegraph wrote in 2012 that a 2001 movie gave an unfair portrayal of the Poles, how did Imitation Game repeat the error? I found the movie highly disappointing.

Even more to the point there was in 2001 a book called “Stealing Secrets” that should have given Imitation Game producers all the background they needed on the true Turing story. Stealing Secrets doesn’t mince words here:

With the tide of the war having changed for the better, Bletchley’s leaders must have concluded in the cold calculus of realpolitik that it no longer had anything to gain from the Poles. […] Even now that the facts of the Poles’ Enigma breakthrough are out in the open, they must still compete in the marketplace of knowledge with earlier fictions. […] For a decade before the truth emerged about the Polish achievement, however, most of the English-speaking public was fed a steady diet of fiction masquerading as fact. […] Therefore, anyone who believes that Bletchley Park paved the road to victory in World War II must give credit to Poland for designing the road and mixing the pavement.

“Must give credit to Poland” as sage advice in 2001 and yet Imitation Game does none of that.

While visiting Bletchley Park I talked with the keepers about how Turing was portrayed relative to the Poles. They told me the film was rubbish and unfair. Their frankness surprised me and I found it refreshing. They basically had nothing good to say about the movie’s portrayal of events.

Telegraph 2016: Blame the Soviets

“We were trapped on the wrong side of the Iron Curtain during the Cold War which meant we did not get the credit that we should have received and nobody wanted to admit that anyone in Eastern Europe had anything to do with Enigma.

The Americans and English weren’t trapped by Soviets yet they too chose not to give credit. Does the world really need the Poles to repeatedly convince us of these facts as if the West doesn’t get it? And were the Poles blocked by Soviets? Sort of.

First, put this in terms of the 1940 Katyn massacre.

The Soviets in 1940 rounded up and assassinated 22,000 Polish military and intellectual elites (doctors, lawyers, professors), taking them into the woods and shooting them in the back of the head. This massacre aimed to destroy any Polish resistance to Soviet control. America learned these details in 1943 from American POWs forced by the Germans to view the mass graves left behind by the Soviets. Instead of bringing the news to light, the US kept it all a secret under the pretense of avoiding friction with Stalin.

That context makes it highly plausible the West was not about to credit Polish intellectuals for breaking Enigma when Stalin was around. But here’s the problem, nobody before the 1970s (20 years after Stalin) got public credit for cracking Enigma. There was literally no risk.

Second, put this in terms of the 1980 Solidarność.

Being on the wrong side of the Iron Curtain at that time is more relevant to our topic because that’s when Bletchley Park started leaking the stories. Now we’re talking about a prime time for strong characters and thaw stories, a time of Polish greatness and the Solidarity movement.

Remember the hardships the Polish cryptographers faced in 1940s France? None of them, even during German capture, leaked details of their work to anyone. Secrecy was crucial to success even after the end of the war. It was a top secret operation that only started to be verified more than 20 years after Stalin was out of the picture.

So really it isn’t about the Iron Curtain. It is about lazy historians in the West not doing a proper job with the facts. Blame is global and can’t be put on the Soviets repressing Poland’s voice, especially since we’re actually talking about the 1990s when these secret stories reached public sources and started to appeal to wider audiences. Still, Poland has to tell the world again and again until we accept it.

Telegraph 2016: Enigma is From End of WWI

The Enigma machine was invented by German engineer Arthur Sherbius at the end of the First World Wat [sic] and were used by the military and government of several countries.

Scherbius applied for a patent for the Enigma in February 1918. WWI ended in November. Given events between those months I wouldn’t say Enigma came at the end. To me that would imply December or the start of 1919. There may even be some significance in timing relative to 1917; that was the year American scientist Vernam was given the task of inventing a communication channel the Germans could not break, patented in 1918. So “developed during the war” would be most appropriate in my mind.

In terms of use by several countries: in 1927 the British government gave Enigma plans to Foss and Knox, code breakers, for review. A book about Knox’s role in breaking Enigma explains how Foss reported it in theory “could be broken given certain conditions”, knowing as little as fifteen letters to figure out the machine settings. This effort led to the British and French working together on deciphering Spanish (Civil War) and Italian (invasion of Ethiopia) military communications in 1936.

Here’s the key issue. Britain was not as keen to monitor German Enigma traffic, despite it being the most advanced, until long after the French and Polish had warned of its importance. France was able to extract German documentation and gave it to Poland, who then cracked this most advanced Enigma by 1933. That should put in perspective Britain listening to “several countries” signals in 1936. That was the year Germany was pushing into Rhineland and getting no push-back from Britain.

Telegraph 2016: Poland Involvement Well Known in WWII

…Polish involvement was well known during World War Two but during the communist time it was not so convenient to admit that there had been so much cooperation between Britain and Poland. It was a very special and very secret alliance.

This just makes no sense to me. It was top secret work, as mentioned above. No one knew about the involvement, except those working in secrecy who couldn’t tell anyone outside. The secrecy extended well into the 1970s. The communist time is when the story was actually not known, rather than it being a matter of convenience.

Also, rather than “admit…so much cooperation” I’d call it acknowledging the lack of a working relationship once the British realized the Poles were ahead and took all their secrets, as forced by the German invasion of France.

Revisiting Bletchley Park

What really would be nice to see is Bletchley Park incorporate French and Polish exhibits, perhaps even curated by representatives from those countries, to give factual explanations of their roles. After all it is meant to be a place to read about the “allied” effort. The Park could benefit from the help for upkeep and maintaining records. Meanwhile, visitors would get a more robust and fair portrayal of a “world” war.

At some point maybe I’ll post my photos here from my trip there, which show some of the odd statements made by British historians, minimizing the efforts of the Polish.

Reasons Against Remembering

Some want to erase history to make others look good; ignoring the Polish role as Allies lets the British or Americans stand out more.

Some want to erase history to make themselves look less bad; ignoring the Polish role on the Axis side lets the Germans stand out more.

Either way overlooking real Polish history is bad for WWII history as well as our understanding of security. Bringing facts forward today should have no risk.

If we give credit to Polish code-breakers we are not diminishing the still monumental contributions of Alan Turing during WWII. We can be more correct in the presentation of historic facts without much impact or edits to Bletchley Park.

When we give credit to those in Poland who fought against the Nazis and did so much right, it does not mean we can forget wrongs done by others, such as Erich von dem Bach-Zelewski, the Polish Nazi who proposed the creation of Auschwitz (just one of more than 10,000 prisoner camps under Nazi control, let alone nearly 1,000 forced labor camps for Jews inside Poland). At the 1946 Nuremberg trials this Polish Nazi testified that while he had no issue with Jews sent to die in camps, he had “tried to prevent the destruction of Warsaw” and his work “saved hundreds of thousands of civilians and tens of thousands of soldiers of Polish nationality”.

As more sunlight comes for the Poles who fought against the Nazis, it may clear the air for us to also discuss and better understand their opposite, the Poles who collaborated. So far we have the book “Hunt for the Jews: Betrayal and Murder in German-Occupied Poland”, which discusses “how the Germans were able to mobilize segments of the Polish society to take part in their plan to hunt down the Jews”. And we have dramatization films like Ida and Poklosie (Aftermath).

The 1946 Kielce Pogrom provides a sad study of how some Poles continued to kill even after the war had ended to try and finish what Germans could not – elimination of Jews from Poland. With that in mind please note a bill has been introduced in Poland making it illegal to mention any Nazi collusion. Such a bill of denial would be a tragedy for those of us who try to bring out examples of bad as well as good and learn from the past.

Right now we should remember a Polish team of mathematicians working with human intelligence for what they were: the first to crack the Nazi Enigma.

As I said at the start, this is no quiet affair. Time to stop overlooking. Let’s do this. Say it loud and proud, Poland broke the Nazi Enigma.

Where is the Revolution in Intelligence? Public, Private or Shared?

Watching Richard Bejtlich’s recent “Revolution in Intelligence” talk about his government training and the ease of attribution is very enjoyable, although at times for me it brought to mind CIA factbook errors in the early 1990s.

Slides that go along with the video are available on Google Drive.

Let me say, to get this post off the ground, I will be the first one to stand up and defend US government officials as competent and highly skilled professionals. Yet I also will call out an error when I see one. This post is essentially that. Bejtlich is great, yet he often makes some silly errors.

Often I see people characterize a government as made up of inefficient troglodytes falling behind. That’s annoying. Meanwhile often I also see people lionize nation-state capabilities as superior to any other organization. Also annoying. The truth is somewhere in between. Sometimes the government does great work, sometimes it blows compared to private sector.

Take the CIA factbook I mentioned above as an example. It has been unclassified since the 1970s and by the early 1990s it was published on the web. Given wider distribution its “facts” came under closer scrutiny from academics. So non-gov people who long had studied places or lived in them (arguably the world’s true leading experts) read this fact book and wanted to help improve it — outsiders looking in and offering assistance. Perhaps some of you remember the “official” intelligence peddled by the US government at that time?

Bejtlich in his talk gives a nod towards academia being a thorough environment and even offers several criteria for why academic work is superior to some other governments (not realizing he should include his own). Perhaps this is because he is now working on a PhD. I mean it is odd to me he fails to realize this academic community was just as prolific and useful in the 1990s, gathering intelligence and publishing it, giving talks and sending documents to those who were interested. His presentation makes it sound like before search engines appeared it required nation-state sized military departments walking uphill both ways in a blizzard to gather data.

Aside from having this giant blind spot to what he calls the “outsider” community, I also fear I am listening to someone with no field experience gathering intelligence. Sure image analysis is a skill. Sure we can sit in a room and pore over every detail to build up a report on some faraway land. On one of my private sector security teams I had a former US Air Force technician who developed film from surveillance planes. He hated interacting with people, loved being in the darkroom. But what does Bejtlich think of actually walking into an environment as an equal, being on the ground, living among people, as a measure of “insider” intelligence skill?

Almost three decades ago I stepped off a plane into a crowd of unfamiliar faces in a small country in Asia. Over the next five weeks I embedded myself into mountain villages, lived with families on the great plains, wandered with groups through jungles and gathered as much information as I could on the decline of monarchial rule in the face of democratic pressure.

One sunny day on the side of a shoulder-mountain stands out in my memory. As I hiked down a dusty trail a teenage boy dressed all in black walked towards me. He carried a small book under his arm. He didn’t speak English. We communicated in broken phrases and hand gestures. He said he was a member of a new party.

Mao was his leader, he said. The poor villages felt they weren’t treated well, decided to do something about it. I asked about Lenin. The boy had never heard the name. Stalin? Again the boy didn’t know. Mao was the inspiration for his life and he was pleased about this future for his village.

This was before the 1990s. And by most “official” accounts there were no studies or theories about Maoists in this region until at least ten years later. I mention this here not because individual people with a little fieldwork can make a discovery. It should be obvious military schools don’t have a monopoly on intel. The question is what happened to that data. Where did information go and who asked about it? Did others have easy access to data gathered?

Yes, someone from private sector should talk about “The Revolution in Private Sector Intelligence”. Perhaps we can find someone with experience working on intelligence in the private sector for many, many years, to tell us what has changed for them. Maybe there will be stories of pre-ChoicePoint private sector missions to fly in on a moment’s notice into random places to gather intelligence on employees who were stealing money and IP. And maybe non-military experience will unravel why Russian operations in private sector had to be handled uniquely from other countries?

Going by Bejtlich’s talk it would seem that such information gathering simply didn’t exist if the US government wasn’t the one doing it. What I hear from his perspective is you go to a military school that teaches you how to do intelligence. And then you graduate and then you work in a military office. Then you leave that office to teach outsiders because they can learn too.

He sounds genuinely incredulous to discover that someone in the private sector is trainspotting. If you are familiar with the term you know many people enjoy as a hobby building highly detailed and very accurate logs of transportation. Bejtlich apparently is unaware, despite this being a well-known thing for a very long time.

A new record of trainspotting has been discovered from 1861, 80 years earlier than the hobby was first thought to have begun. The National Railway Museum found a reference to a 14 year old girl writing down the numbers of engines heading in and out of Paddington Station.

It reminds me a bit of how things must have moved away from military intelligence for the School of Oriental and African Studies in London (now just called SOAS). The British cleverly set up in London a unique training school during the first World War, as explained in the 1917 publication “Nature”:

…war has opened our eyes to the necessity of making an effort to compete vigorously with the activities — political, commercial, and even scientific and linguistic — of the Germans in Asia and Africa. We have discovered that their industry was rarely disinterested, and that political propaganda was too often at the root of “peaceful penetration” in the field of missionary, scientific, and linguistic effort.

In other words, a counter-intelligence school was born. Here the empire could maintain its military grip around the world by developing the skills to better gather intelligence and understand enemy culture (German then, but ultimately native).

By the 1970s SOAS, a function of the rapidly changing British global position, seemed to take on wider purpose. It reached out and looked at new definitions of who might benefit from the study and art of intelligence gathering. By 1992 regulars like you or me could attend and sit within the shell of the former hulk of a global analysis engine. Academics there focused on intelligence gathering related to revolution and independence (e.g. how to maintain profits in trade without being a colonial power).

I was asked by one professor to consider staying on for a PhD to help peel apart Ghana’s 1956 transition away from colonial rule, for only academic purpose of course. Tempted as I was, LSE instead set the next chapters of my study, which itself seems to have become known sometime during the second World War as a public/private shared intelligence analyst training school (Bletchley Park staff tried to convince me Zygalski, inventor of equipment to break the Enigma, lectured at LSE although I could find no records to support that claim).

Fast forward five years to 1997 and the Corner House is a good example of academics in London who formalized public intelligence reports (starting in 1993?) into a commercial portfolio. In their case an “enemy” was more along the lines of companies or even countries harming the environment. This example might seem a bit tangential until you ask someone for expert insights, including field experience, to better understand the infamous pipeline caught in a cyberwar.

Anyway, without me dragging on and on about the richness of an “outside” world, Bejtlich does a fine job describing some of the issues he had adjusting. He just seems to have been blind to communities outside his own and is pleased to now be discovering them. His “inside” perspective on intelligence is really just his view of inside/outside, rather than any absolute one. Despite pointing out how highly he regards academics who source material widely he then unfortunately doesn’t follow his own advice. His talk would have been so much better with a wee bit more depth of field and some history.

Let me drag into this an interesting example that may help make my point, that private analysts not only can be as good or better than government they may even be just as secretive and political.

Eastman Kodak investigated, and found something mighty peculiar: the corn husks from Indiana they were using as packing materials were contaminated with the radioactive isotope iodine-131 (I-131). Eastman Kodak at the time had some of the best researchers in the country on its team (the company even had its own nuclear reactor in the 1970s), and they discovered something that was not public knowledge: those farms in Indiana had been exposed to fallout from the 1945 Trinity Test in New Mexico, the world’s first atmospheric nuclear bomb explosion, which ushered in the atomic age. Kodak kept this exposure silent.

The American film industry giant realized by 1946, from clever digging into the corn husk material used for packaging, that the US government was poisoning its citizens. The company filed a formal complaint and kept quiet. Our government responded by warning Kodak of military research, helping the company understand how to hide from the public any signs of dangerous nuclear fallout.

Good work by the private sector helping the government more secretly screw the American public without detection, if you see what I mean.

My point is we do not need to say the government gives us the best capability for world-class intelligence skills. Putting pride aside there may be a wider world of training. So we also should not say private-sector makes someone the best in world at uncovering the many and ongoing flaws in government intelligence. Top skills can be achieved in different schools of thought, which serve different purposes. Kodak clearly worried about assets differently than the US government, while they still kind of ended up worrying about the same thing (colluding, if you will). Hard to say who evolved faster.

By the way, speaking of relativity, I also find it amusing Bejtlich’s talk is laced with his political preferences as landmines: Hillary Clinton is set up as so obviously guilty of dumb errors you’d be a fool not to convict her. President Obama is portrayed as maliciously sweeping a present and clear danger of terrorism under the carpet, putting us all in grave danger.

And last but not least we’re led to believe if we get a scary black bag indicator we should suspect someone who had something to do with Krav Maga (historians might say an Austro-Hungarian or at least Slovakian man, but I’m sure we are supposed to think Israeli). Is that kind of like saying someone who had something to do with Karate (Bruce Lee!) when hinting at America?

And one last thought. Bejtlich also mentions gathering intelligence on soldiers in the Civil War as if it would be like waiting for letters in the mail. In fact there were many more routes of “real time” information. Soldiers were skilled at sneaking behind lines (pun not intended) tapping copper wires and listening, then riding back with updates. Poetry was a common method of passing time before a battle by creating clever turns of phrase about current events, perhaps a bit like twitter functions today. “Deserters” were a frequent source of updates as well, carrying news across lines.

I get what Bejtlich is trying to say about the speed of information today being faster, and I have to technically agree with that one aspect of a revolution; of course he’s right about the raw speed of a photo being posted to the Internet and seen by an analyst. Yet we shouldn’t under-sell what constituted “real-time” 150 years ago, especially if we think about those first trainspotters…

Hillary, Official Data Classification, and Personal Servers

The debate over Hillary Clinton’s use of email reminds me of a Goldilocks tech-management dilemma. Users tend to think you are running too slow or too fast, never just right:

Too slow

You face user ire, even potential revolt, as IT (let alone security) becomes seen as the obstacle to progress. Users want access to get their jobs done faster and better, so they push data to cloud services and apps, bring in their own devices, and run like they have no fear, because trust has shifted to clever new service providers.

We all know that has been the dominant trend, and anyone caught saying “BlackBerry is safer” risks being kicked out of the cool technology clubs. Even more to the point, many security thought leaders have said over and over to choose cloud and iPad because they are “safer”.

I mentioned this in a blog post in 2011 when the Apple iPad was magically “waived” through security assessments for USAID.

Today it seems ironic to look back at Hillary’s ire. We expect our progressive politicians to look for modernization opportunities and here is a perfect example:

Many U.S. Agency for International Development workers are using iPads–a fact that recently drew the ire of Secretary of State Hillary Clinton when she sat next to a USAID official on a plane, said Jerry Horton, chief information officer at USAID. Horton spoke April 7 at a cloud computing forum at the National Institute of Standards and Technology in Gaithersburg, Md.

Clinton wanted to know why a USAID official could have an iPad while State Department officials still can’t. The secret, apparently, lies in the extensive use of waivers. It’s “hard to dot all the Is and cross all the Ts,” Horton said, admitting that not all USAID networked devices are formally certified and accredited under Federal Information Security Management Act.

“We are not DHS. We are not DoD,” he said.

While the State Department requires high-risk cybersecurity, USAID’s requirements are much lower, said Horton. “And for what is high-security it better be on SIPR.”

Modernizing, innovating, asking government to reform is a risky venture. At the time I don’t remember anyone saying Hillary was being too risky, or that her ire was misplaced in asking for technology improvements. There was a distinct lack of critique, despite my blog post sitting in the top three Google search results for weeks. If anything I heard the opposite: that the government should trust and catch up to Apple’s latest whatever.

Too fast

Now let’s look at the other perspective. Dump the old, safe, trusted BlackBerry so users can consume iPads like candy going out of style, and you face watching them stumble and fall on their diabetic face. Consumption of data is the goal, and yet it also is the danger.

Without getting too far into the weeds of the blame game (figuring out who is responsible for a disaster), it may be better to look at why there will be accidents and misunderstandings in a highly politicized environment.

What will help us make sure we avoid someone extracting data off SIPR/NIPR without realizing there is a “TS/SAP” classification incident ahead? I mean, what if the majority of the data in question pertains to a controversial program, let’s say for example drones in Pakistan, which may or may not be secret depending on one’s politics? Colin Powell gives us some insight into the problem:

…emails were discovered during a State Department review of the email practices of the past five secretaries of state. It found that Powell received two emails that were classified and that the “immediate staff” working for Rice received 10 emails that were classified.

The information was deemed either “secret” or “confidential,” according to the report, which was viewed by CNN.

In all the cases, however — as well as Clinton’s — the information was not marked “classified” at the time the emails were sent, according to State Department investigators.

Powell noted that point in a statement on Thursday.

“The State Department cannot now say they were classified then because they weren’t,” Powell said. “If the Department wishes to say a dozen years later they should have been classified that is an opinion of the Department that I do not share.”

“I have reviewed the messages and I do not see what makes them classified,” Powell said.

This classification game is at the heart of the issue. Reclassification happens. Aggregation of non-secret data can make it secret. If we characterize it as a judgment flaw by only one person, or even three, we may be postponing the critical need to review whether there are wider systemic issues in decision-making and tools.
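To make the aggregation point concrete, here is a minimal sketch of how a rules engine can rate a combination of individually unclassified fields as secret. All the field names, levels, and rules below are hypothetical, invented purely for illustration:

```python
# Hypothetical classification levels, ordered by sensitivity.
UNCLASSIFIED, CONFIDENTIAL, SECRET = 0, 1, 2

# Each field on its own is unclassified... (invented examples)
FIELD_LEVELS = {
    "flight_time": UNCLASSIFIED,
    "airframe": UNCLASSIFIED,
    "grid_square": UNCLASSIFIED,
}

# ...but certain combinations are deemed sensitive in aggregate.
AGGREGATION_RULES = [
    ({"airframe", "grid_square"}, SECRET),  # type + location together
]

def classify(fields):
    """Return the classification of a message containing these fields."""
    level = max((FIELD_LEVELS.get(f, UNCLASSIFIED) for f in fields),
                default=UNCLASSIFIED)
    for combo, combo_level in AGGREGATION_RULES:
        if combo <= set(fields):  # all fields of the combo are present
            level = max(level, combo_level)
    return level

print(classify(["flight_time"]))              # 0: unclassified on its own
print(classify(["airframe", "grid_square"]))  # 2: secret in aggregate
```

The catch, of course, is that no one hands the sender this rule table up front; the rules are applied (or invented) after the fact.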

To paraphrase the ever insightful Daniel Barth-Jones: smart people at the top of their political game who make mistakes aren’t “stupid”; we have to evaluate whether systems that don’t prevent mistakes by design are….

Just right

Assuming we agree we want to go faster than “too slow”, and that we do not want to run ahead “too fast” into disasters, a middle ground needs to come into better focus.

Giving up “too slow” means a move away from blocking change. And I don’t mean merely achieving FISMA certification. That is seen as a tedious low bar for security rather than the right vehicle for pushing toward the top end. We need to take compliance seriously as a guide while also embracing hypotheses and creative thinking, to tease out a reasonable compromise.

We’re still very early in the dinosaur days of classification technology, sitting all the way over by the slow end of the equation. I’ve researched solutions for years, seen some of the best engines in the world (Varonis, Olive), and it’s not yet looking great. We have many more tough problems to solve, leaving open a market ripe for innovation.

Note the disclaimer on Microsoft’s “Data Classification Toolkit”:

Use of the Microsoft Data Classification Toolkit does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfillment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.

Let me explain the problem by way of analogy, to be brief.

Cutting-edge research on robots focuses on predictive capabilities to enable off-road driving free from human control. A robot starts with near-field sensors, which give it about 20 feet of vision ahead to avoid immediate danger. Then the robot needs to see much further to avoid danger altogether.

This really is the future of risk classification. The better your classification of risks, the better your predictive plan, and the less you have to make time-pressured disaster-avoidance decisions. And of course “driverless” is a relative term; these automation systems still need human input.

In a DARPA LAGR Program video the narrator puts it simply:

A short-sighted robot makes poor decisions

Imagine longer-range vision algorithms that generate an “optimal path”, applied to massive amounts of data (different classes of email messages instead of trees and rocks in the great outdoors), dictating what you actually get to see.
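The DARPA narrator’s point can be sketched in a few lines of code. The toy map and costs below are my own invented assumptions, not DARPA’s algorithm: a greedy planner with only near-field vision takes the cheapest next step straight into a costly trap, while a whole-path search (Dijkstra’s algorithm here) sees past it to the “optimal path”:

```python
import heapq

# Invented toy terrain: edge costs between waypoints. The route S->A looks
# cheap up close but ends in a 100-cost "sand trap"; S->B looks worse at
# first yet is far cheaper overall.
EDGES = {"S": {"A": 1, "B": 5}, "A": {"G": 100}, "B": {"G": 1}, "G": {}}

def greedy(start, goal):
    """Near-field vision: always take the cheapest next edge."""
    path, cost, node = [start], 0, start
    while node != goal:
        node, step = min(EDGES[node].items(), key=lambda kv: kv[1])
        cost += step
        path.append(node)
    return path, cost

def dijkstra(start, goal):
    """Long-range vision: minimize total cost over the whole path."""
    frontier = [(0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, step in EDGES[node].items():
            heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))

print(greedy("S", "G"))    # (['S', 'A', 'G'], 101) -- short-sighted
print(dijkstra("S", "G"))  # (['S', 'B', 'G'], 6)   -- optimal path
```

A short-sighted robot makes poor decisions, indeed; the same applies to a classifier that only ever looks at the message in front of it.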


What I like about this optimal path illustration is the perpendicular alignment of two types of vision. The visible world is flat. And then there is the greater, optimal path theory, presented as a wall-like circle, easily queried without actually being “seen”. This is like putting your faith in a map because you can’t actually see all the way from San Francisco to New York.

The difference between the short and the long view highlights why any future of safe autonomous systems will depend on the processing power of the end nodes, such that they can both create larger areas of “flat” rings and build out “taller” optimal paths.

Here is where “personal” servers come into play. Power becomes a determinant of vision and autonomy. Personal investments often can increase processing power faster than government bureaucracy and depreciation schedules allow. I mean, if the back-end system looks at the ground ahead and classifies it as sand (unsafe to proceed), and the autonomous device does its own assessment on its own servers and decides it is looking at asphalt (safe for speed), who is right?

The better the predictive algorithms the taller the walls of vision into the future, and that begs for power and performance enhancements. Back to the start of this post, when IT isn’t providing users the kind of power they want for speed, we see users move their workloads towards BYOD and cloud. Classification becomes a power struggle, as forward-looking decisions depend on reliable data classification from an authoritative source.

If authoritative back-end services accidentally classify data as safe and later reverse to unsafe (or vice versa), the nodes/people depending on the classification service should not be the only targets in an investigation of judgment error.

We can joke about how proper analysis would always choose a “just right” Goldilocks long-term path, yet in reality the debate is about building a high-performance data classification system that reduces her cost of error.

BBC’s false history of long distance communication

One might think history would be trivially easy, given how these days every fact is on the Internet at the tips of our fingers. However, being a historian still takes effort, perhaps even talent. Why?

The answer is simple: “the value of education is not the learning of many facts but the ability of the mind to think”. I’ll let you try and search to figure out the person who said that.

A historian is trained to apply expertise in thinking, to run facts through a system of sound logic that others can validate, rather than just leave facts on their own. It is a bit like a chef cooking a delicious meal rather than offering you a bowl of raw ingredients. The analysis to get the right combination of ingredients cooked together can be hard. And on top of finding the results desirable, we also need ways to know the preparations were clean and can be trusted.

Take for example a BBC magazine article about long-distance communication that cooks up a soup called “How Napoleon’s semaphore telegraph changed the world”.

This article unfortunately offers factual conclusions that are poorly prepared and end up tasting all wrong. Let’s start with three basic assertions the BBC has asked readers to swallow:

  1. The last stations were built in 1849, but by then it was clear that the days of line-of-sight telegraphy were done.
  2. The military needs had disappeared, and latterly the operators’ main task was transmitting national lottery numbers.
  3. The shortcomings of visual communication were obvious. It only functioned in daytime and in good weather.

First point: Line-of-sight telegraphy is still used to this day. Anyone sailing the Thames, or any modern waterway for that matter, would happily tell you they rely on a system of lights and flags. I wrote it into our book on cloud security. The BBC itself has a story about semaphore adoption during nuclear disarmament campaigns. As long as we have visual sensors, these signal days will never be done. Dare I mention the line-of-sight communication scene in a futuristic sci-fi film The Martian?

Second point: Military needs are not the only needs. This should be obvious from the first point, as well as from common sense. If it were true you would not be reading a blog, ever. More to the stupidity of this reasoning, the French system resorted to a lottery because it went bankrupt. The inventor had pinned all his hopes for a very expensive system on military financing, and that didn’t come through. So the lottery was a last-ditch attempt to find support after the military walked.


A sad footnote to this is that the French military didn’t see the Germans coming in later wars. So I could dive into why military needs didn’t disappear, but that would be more complicated than proving there were other needs and the system just wasn’t funded properly to survive.

Third point: Anyone heard of a lighthouse? What does it do best? Functions at night and in bad weather, am I right? Fires on a hill (e.g. pyres) also work quite well at night. Or a flashlight, such as the one on your cell-phone.

Try out the Jolla phone app “Morse sender” if you want to communicate over distance at night and bad weather using Morse code. Real shortcomings of visual communication come during thick smoke (e.g. old gunpowder battles or near coal power), which leads to audio signals such as the talking drum, fog horns, bagpipes and songs or cries.
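As a trivial illustration of the idea, a few lines of code can turn a message into the light pulses a phone torch would flash. The Morse table below covers only a handful of letters for brevity (a real sender would carry the full international code):

```python
# Partial Morse table, just enough for a distress call. A real
# implementation would include the full international Morse code.
MORSE = {"S": "...", "O": "---", "E": ".", "T": "-"}

def to_morse(message):
    """Encode known letters as dot/dash pulse patterns, space-separated."""
    return " ".join(MORSE[ch] for ch in message.upper() if ch in MORSE)

print(to_morse("SOS"))  # ... --- ...
```

Map each dot to a short flash and each dash to a long one, and you have night-and-fog-capable line-of-sight signaling on any modern pocket device.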

OK, so all three of those points are false and easily disproved, tossed into the bin. Now for the harder part, the overall conclusion in two sentences from BBC magazine:

Smoke, fire, light, flags – since time immemorial man had sought to speak over space.

What France did in the first half of the 19th Century was create the first ever system of distance communication.

A shame the writer acknowledges fire and flags here, because those are the very facts used above to disprove the article’s own analysis (they work at night and are still in use). Now can we disprove “first ever system of distance communication”?

I say this is hard because I’m giving the writer the benefit of the doubt. Putting myself in their shoes, they obviously see a big difference between the “immemorial” methods used around the world and a brief French experiment with an expensive, unfunded militaristic system.

As hard as I try, honestly I don’t see why we should call the French system first. Consider this passage from archaeologist Charles Jones’ 1873 “Antiquities of the Southern Indians”:


Note this is a low-cost, night-time-resilient system that leaves no trace. Pretty damning evidence of being earlier and arguably better. We have fewer first-hand proofs from earlier periods, yet it would be easy to argue there were complex fire signals as far back as 150 BCE.

The Greek historian Polybius explained in The Histories that fire signals were used to convey complex messages over distance via cipher. A flame would be raised and lowered, turned on or off, to signal the column and row of a letter.

The most recent method, devised by Cleoxenus and Democleitus and perfected by myself, is quite definite and capable of dispatching with accuracy every kind of urgent messages, but in practice it requires care and exact attention. It is as follows: We take the alphabet and divide it into five parts, each consisting of five letters. There is one letter less in the last division, but this makes no practical difference. Each of the two parties who are about to signal each other must now get ready five tablets and write one division of the alphabet on each tablet, and then come to an agreement that the man who is going to signal is in the first place to raise two torches and wait until the other replies by doing the same. This is for the purpose of conveying to each other that they are both at attention. These torches having been lowered, the dispatcher of the message will now raise the first set of torches on the left side indicating which tablet is to be consulted, i.e. one torch if it is the first, two if it is the second, and so on. Next he will raise the second set on the right on the same principle to indicate what letter of the tablet the receiver should write down.

It even works at night and in bad weather!
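Polybius’s scheme is easy to reconstruct in code. The sketch below uses the modern Latin-alphabet Polybius-square convention (25 letters, I and J merged) rather than his five Greek tablets, but the mechanics are identical: each letter becomes a pair of torch counts, left for which tablet and right for the position on it:

```python
# 25-letter alphabet split into five "tablets" of five letters each
# (I/J merged -- a modern Polybius-square convention, not his Greek text).
ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"

def to_torches(message):
    """Encode each letter as (left torches = tablet, right torches = position)."""
    signals = []
    for ch in message.upper().replace("J", "I"):
        if ch in ALPHABET:
            idx = ALPHABET.index(ch)
            signals.append((idx // 5 + 1, idx % 5 + 1))
    return signals

def from_torches(signals):
    """Decode torch-count pairs back into letters."""
    return "".join(ALPHABET[(left - 1) * 5 + (right - 1)]
                   for left, right in signals)

sig = to_torches("CRETANS")
print(sig)                # [(1, 3), (4, 2), (1, 5), (4, 4), (1, 1), (3, 3), (4, 3)]
print(from_torches(sig))  # CRETANS
```

Slow, yes, and demanding of “care and exact attention”, but a genuine cipher system for arbitrary messages, some two thousand years before the French towers.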

Speaking of which, there may have been an even earlier system, perhaps around 247 BCE. Given the engineering marvel of the Pharos of Alexandria lighthouse, someone may know more about its use for long-distance line-of-sight communication.

Has the point been made that the first ever system of distance communication was not the French during their revolution?

I think the real conclusion here, in consideration of BBC magazine’s attempt to persuade us, is that someone was digging for reasons to be proud of French militarism. Had they bothered to think more deeply, or to seek more global sources of data, they might have avoided releasing such a disappointing article.

When Native Americans demonstrated excellent long-distance communication systems, European settlers mocked them. Yet the French built one and suddenly we’re supposed to remember it and say…oh la la? No thanks, too hard to swallow. That’s poor analysis of facts.

the poetry of information security