

Movie Review: JSA (Joint Security Area)

A South Korean soldier slowly hands a shiny mechanical lighter to a North Korean soldier, as if to give thanks through a transfer of better technology. The North Korean lights a cigarette and contemplates the object. The South Korean clarifies its value: “you can see yourself in the reflection; see how clean your teeth are”. This movie is full of clever and humorous juxtapositions, reminiscent of debates pitting urban liberal against rural conservative values.

The area known as JSA (Joint Security Area) is a small section of the Demilitarized Zone (DMZ) between North and South Korea. The two countries have their soldiers stationed literally face-to-face, just a few feet from each other. Buildings in the area have served as meeting space, brokered by international oversight, and there is palpable tension in the air.

This movie draws the viewer into this feeling and the lives of soldiers suspended by two countries within an old armistice and trying to find ways around it; men and women trapped inside an internationally monitored agreement to postpone hostilities. Primary roles are played by just four soldiers, two North and two South. Also stepping up to the dance are the investigators and observers, positioned in an awkward third role between the two sides.

The NNSC (Neutral Nations Supervisory Commission) and the US exert a dominant secondary tier of influence on the dialogue. I found no mention of other global players, such as China or Russia. Perhaps the absence of these countries is explained by the fact that this movie was released in 2000. Today it might be a different story.

Directed by Park Chan-wook, the film’s cultural perspective and references are clearly South Korean. North Korea is portrayed in a surprising light as the more thoughtful and grounded of the two countries. While the South is shown to be obsessed with shallow perfections, looking at itself and boasting about false success, the roles played by the North are either weary and wise or kind and naive. It is the US and UN that come out as the real villains in the script, perpetuating a civil war that would heal if only the outside meddlers allowed it.

What comes across to me is a third-generation war movie; a Tarantino-style M*A*S*H. There is a strong pacifist-irony thread, clearly influenced by Tarantino’s style of borrowing and remixing old scenes from popular war and gangster movies with today’s direct approach. No subtlety will be found. The viewer is granted displays of full-gore, slow-motion, blood-splattering scenes of useless death, the sort of lens Tarantino developed as he grew up working in a Los Angeles video-rental store. John Wayne, for example, is played by the North Korean sergeant…

Park is quoted as saying his movies highlight “the utter futility of vengeance and how it wreaks havoc on the lives of everyone involved”.

Despite the gore and sometimes strained irony, the film is suspenseful and on-target with much of its commentary. It offers a counter-intuitive story that veers uncomfortably close to glorifying the North and vilifying the US, delivering over-simplifications of civil war. This is exactly the sort of popular cartoonist perspective many of us need to take into consideration, forcing a rethink of how “the dark side” is portrayed. If Marvel were to dream up a superhero of South Korean origin, it may have more shades of this plot than anything a US director would ever allow.

I give it four out of an unspecified number of penguins.

Posted in Security.


A Political Science TL;DR for InfoSec

More and more often I see those experienced in technology address issues of political science. A malware reverser will speculate on terrorist motives. An expert with network traffic analysis will make guesses about organized crime operations.

When a journalist asks an expert in information security to explain the human science of an attack, such as groups and influences involved, the answers usually are quips and jabs instead of being based on science or study.

This is unfortunate because I suspect with a little reading or discussion things would improve dramatically. My impression is there is no clear guide, however, and when I raise the issue I’ve been asked to put something together. So, since I spent my undergraduate and graduate degrees on political philosophy (ethics of humanitarian intervention), perhaps I can help here in the ways that I was taught.

Let me give a clear example, which recently fell on my desk.

Say Silent Chollima One More Time

About two years ago a private company created by a wealthy businessman, with strong ties to the US government, was launched with ambitious goals to influence the world of information security investigations.

When 2013 kicked off CrowdStrike was barely known outside of inner-sanctum security circles. The stealth startup–founded by former Foundstone CEO, McAfee CTO, and co-author of the vaunted Hacking Exposed books George Kurtz–was essentially unveiled to the world at large at the RSA Security Conference in February.

Just two years after being formed, note how they describe the length of their projects, slyly claiming a start six years before the company existed.

Interviewer: What do you make of the FBI finding — and the president referred to it — that North Korea and North Korea alone was behind this attack?

CrowdStrike: At CrowdStrike, we absolutely agree with that. We have actually been tracking this actor. We actually call them Silent Chollima. That’s our name for this group that is based out of North Korea.

Interviewer: Say the name again.

CrowdStrike: Silent Chollima. Chollima is actually a national animal of North Korea. It’s a mythical flying horse. And we have been tracking this group since 2006.

Hold on to that mythical flying horse for a minute.

To be fair, CrowdStrike may have internally blended their own identity so much with the US government that they do not realize those of us outside cringe when they blur the line between a CrowdStrike altar and our state. I think hiring many people away from the US government still does not excuse such casual use of “we” when speaking about intelligence from before the 2013 company launch.

Word use and definitions matter greatly to political scientists. I will dig in to explain why. Take a closer look at that reference to a mythological flying horse. CrowdStrike adds heavy emphasis where none is required. They want everyone to take note of what “we actually call” suspects without any sense of irony for propagandist methods.

Some of it may be just insensitive or silly labeling for convenience, rather than outright propaganda designed to change minds. Here’s their “meet the adversaries” page.

animal-adversaries

Anyone else find it strange that the country of Tiger is an India? What is an India? Ok, seriously though, only Chollima gets defined? I have to look up what kind of animal an India is?

Iran (Persia) being called a Kitten and India the Tiger is surely meant as some light-hearted, back-slapping comedy to lighten the mood in CrowdStrike offices. Long nights poring over forensic material; might as well start filing with pejorative names for foreign indicators because, duh, adversaries.

Political scientists say the words used to describe a suspect before a trial heavily influence everyone’s views. An election also has this effect. Pakistan has some very interesting long-term studies of voting results from ballots for the illiterate, where candidates are assigned an icon.

Imagine a ballot where you have to choose between a chair and a kitten.

Vote for me!
Vote for me!

CrowdStrike makes no bones about the fact that they believe someone they suspect will be considered guilty until proven innocent by others. This unsavory political philosophy comes through clearly in another interview (where they also take a moment to mention Chollima):

We haven’t seen the skeptics produce any evidence that it wasn’t North Korea, because there is pretty good technical attribution here. […] North Korea is one of the few countries that doesn’t have a real animal as a national animal. […] Which, I think, tells you a lot about the country itself.

I’ll leave the “pretty good technical attribution” statement alone here because I want to deal with that in a separate post. Let’s break the rest into two parts.

Skeptics Haven’t Produced Evidence

First, to the challenge for skeptics to produce counter-evidence, Bertrand Russell eloquently and completely destroyed such reasoning long ago. His simple celestial teapot analogy speaks for itself.

If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

This is the danger of ignoring lessons from basic political science, let alone its deeper philosophical underpinnings; you end up as an information security “thought leader” talking absolute nonsense. CrowdStrike may as well tell skeptics to produce evidence that attacks aren’t from a flying horse.

The burden of proof logically and obviously remains with those who sit upon an unfalsifiable belief. As long as investigators offer statements like “we see evidence and you can’t” or “if only you could see what we see”, the burden cannot so easily and negligently shift away.

Perhaps I also should bring in the proper, and sadly ironic, context to those who dismiss or silence skepticism.

Studies of North Korean politics emphasize their leaders often justify total control while denying information to the public, silencing dissent and making skepticism punishable. In an RT documentary, for example, North Korean officers happily say they must do as they are told and they would not question authority because they have only a poor and partial view; they say only their dear leader can see all the evidence.

Skepticism should not be rebuked by investigators if they desire, as scientists tend to, challenges that help them find truth. Perhaps it is fair to say CrowdStrike takes the very opposite approach of what we often call crowd-sourcing?

Analysts within the crowd who speak out as skeptics tend to be most practiced in the art of accurate thought, precisely because they do not dismiss caution and doubt. Incompleteness is embraced and examined. Recent studies explain this. Read, for example, a new study called “Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics” that highlights how and why politics alter analyst conclusions.

Analysts also operate under bureaucratic-political pressure and are tempted to respond to previous mistakes by shifting their response thresholds. They are likelier to say “signal” when recently accused of underconnecting the dots (i.e., 9/11) and to say “noise” when recently accused of overconnecting the dots (i.e., weapons of mass destruction in Iraq). Tetlock and Mellers (2011) describe this process as accountability ping-pong.

Then consider an earlier study regarding what makes people into “superforecasters” when they are accountable to a non-political measurement.

…accountability encourages careful thinking and reduces self-serving cognitive biases. Journalists, media dons and other pundits do not face such pressures. Today’s newsprint is, famously, tomorrow’s fish-and-chip wrapping, which means that columnists—despite their big audiences—are rarely grilled about their predictions after the fact. Indeed, Dr Tetlock found that the more famous his pundits were, the worse they did.

CrowdStrike is as famous as they get, as designed from launch. Do they have any non-political, measured accountability?

Along with being skeptical, analysts sometimes are faulted for being grouchy. It turns out in other studies that people in bad moods remember more detail in investigations and provide more accurate facts, because they are skeptical. The next time you want to tell an analyst to brighten up, think about the harm to the quality of their work.

Be skeptical if you want to find the right answers in complex problems.

A Country Without a Real Animal

Second, going back to the interview statement by CrowdStrike, “one of the few countries” without “a real animal as a national animal” is factually easy to check. It is most obviously false.

With a touch of my finger I find mythical national animals used in England, Scotland, Bhutan, China, Greece, Hungary, Indonesia, Iran, Portugal, Russia, Turkey, Vietnam…and the list goes on.

Even if I try to put myself in the shoes of someone making this claim I find it impossible to see how use of national mythology could seem distinctly North Korean to anyone from anywhere else. It almost makes me laugh when I think this is a North Korean argument for false pride: “only we have a mythological national animal”.

The reverse also is awkward. Does anyone really vouch for a lack of a national animal for this territory? In those mythical eight years of CrowdStrike surveillance (or in real time, two years) did anyone notice, for example, some Plestiodon coreensis stamps or the animation starring Sciurus vulgaris and Martes zibellina?

And then, right off the top of my head I think of national mythology frequently used in Russia (two-headed monster) and England (monster being killed):

russiastgeorgedragon2s

And then the Houston Astros playing the Colorado Rockies pop into my head. Are we really supposed to cheer for a mythical mountain beast, some kind of anthropomorphic purple triceratops, or is it better to follow the commands of a green space alien with antennae? But I digress.

At this point I am unsure whether to go on to the second half of the CrowdStrike statement. Someone who says national mythical animals are unique to North Korea is in no position to assert it “tells you a lot about the country itself”.

Putting myself again in their shoes, CrowdStrike may think they convey “fools in North Korea have false aspirations; people there should be more skeptical”.

Unfortunately the false uniqueness claim makes it hard to unravel who the fools really are. A little skepticism would have helped CrowdStrike realize mythology is universal, even at the national level. So what do we really learn when a nation has evidence of mythology?

In my 2012 Big Data Security presentations I touched on this briefly. I spoke to risks of over-confidence and belief in data that may undermine further analytic integrity. My example was the Griffin, a mythological animal (used by the Republic of Genoa, not to mention Greece and England).

Recent work by an archeologist suggests these legendary monsters were a creative interpretation by Gobi nomads of Protoceratops bones. Found during gold prospecting, the unfamiliar bones turned into stories told to trading partners, which spread further until many people were using Griffins in their architecture and crests.

Ok, so really mythology tells us that people everywhere are creative and imaginative with minds open to possibilities. People are dreamers and have aspirations. People stretch the truth and often make mistakes. The question is whether at some point a legend becomes hard or impossible to disprove.

A flying horse could symbolize North Koreans are fooled by shadows, or believe in legends, but who among us is not guilty of creativity to some degree? Creativity is the balance to skepticism and helps open the mind to possibilities not yet known or seen. It is not unique to any state but rather essential to the human pursuit of truth.

Be creative if you want to find the right answers in complex problems.

Power Tools

Intelligence and expertise in security, as you can see, do not automatically transfer to a foundation for sound political-scientific thought. Scientists often barb each other about who has the more difficult challenges to overcome, yet there are real challenges in everything.

I think it important to emphasize here that understanding human behavior is a very different skill. Not a lesser skill, a different one. XKCD illustrates how a false or reverse-confidence test is often administered:

XKCD Imposter

Being the best brain surgeon does not automatically make someone an expert in writing laws any more than a political scientist would be an expert at cutting into your skull.

Basic skills in any field can be used to test for fraud (imposter exams), while the patience required by more nebulous, open-ended advanced thinking in every field can be abused. Multiplication tables need not be memorized because you can look them up to verify true/false. So too with facts in political science, as I illustrated with mythology and symbolism for states. Quick, what’s your state animal?

Perhaps it is best said there are two modes to everything: things that are trivial and things that are not yet understood. The latter is what people mean when they say they have found something “sophisticated”.

There really are many good reasons for technical experts to quickly bone up on the long and detailed history of human science. Not least of them is to cut down propaganda and shadows, move beyond the flying horses, and uncover the best answers.

The examples I used above are very specific to current events in order to clarify what a problem looks like. Hopefully you see a problem to be solved and now are wondering how to avoid a similar mistake. If so, now I will try to briefly suggest ways to approach questions of political science: be skeptical, be creative. Some might say leave it to the professionals, the counter-intelligence experts. I say never stop trying. Do what you love and keep doing it.

Achieving a baseline to parse how power is handled should be an immediate measurable goal. Can you take an environment, parse who the actors are, what groups they affiliate with and their relationships? Perhaps you see already the convenient parallels to role based access or key distribution projects.
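That parsing exercise can be sketched as a tiny data model (all names here are hypothetical, chosen only for illustration), with political groups playing the same structural role that roles play in role-based access control:

```python
# A minimal sketch of mapping an environment into actors, group
# affiliations, and influence relationships -- the same shape as a
# role-based access model, where roles stand in for political groups.
from collections import defaultdict

class PowerMap:
    def __init__(self):
        self.groups = defaultdict(set)      # group name -> member actors
        self.relations = defaultdict(set)   # actor -> actors it influences

    def affiliate(self, actor, group):
        self.groups[group].add(actor)

    def influences(self, actor, other):
        self.relations[actor].add(other)

    def shared_groups(self, a, b):
        # Overlapping affiliations often explain otherwise puzzling behavior.
        return {g for g, members in self.groups.items()
                if a in members and b in members}

env = PowerMap()
env.affiliate("analyst", "vendor")
env.affiliate("analyst", "ex-government")
env.affiliate("official", "ex-government")
env.influences("official", "analyst")
print(env.shared_groups("analyst", "official"))  # {'ex-government'}
```

The same query that answers "which roles grant both users this access" in an RBAC system answers "which affiliations do these two actors share" in a political analysis, which is the parallel suggested above.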

Aside from just being a well-rounded thinker, learning political science means developing powerful analytic tools that quickly and accurately capture and explain how power works.

Being Stateful

Power is the essence of political thought. The science of politics deals with understanding systems of governing, regulating power, of groups. Political thinking is everywhere, and has been forever, from the smallest group to the largest. Many different forms are possible. Both the framework of the organization and leadership can vary greatly.

Some teach mainly about relationships between states, because states historically have been a foundation for the generation of power. This is problematic as old concepts grow older, especially in IT, given that no single agreed-upon definition of “state” yet exists.

Could users of a service ever be considered a state? Google might be the most vociferous and open opponent of our old definitions of state. While some corporations engage with states and believe in collaboration with public services, Google appears to define a state as an irrelevant localized tax hindering its global ambitions.

A major setback to this definition came when an intruder was detected moving about Google’s state-less global flat network to perpetrate IP theft. Google believed China was to blame and went to the US government for services; only too late did the heads of Google realize state-level protection without a state affiliation could prove impossible. Here is a perfect example of Google engineering an anti-state theory full of dangerous presumptions that court security disaster.


google-domination

A state is arguably made up of people, who govern through representation of their wants and needs. Google sees benefits in taking all the power and owing nothing in return, doing as it pleases because it knows best. An engineer who studied political science might quickly realize that removing the ability for people to represent themselves as a state, forced to bend at the whim of a corporation, would be a reversal of fortune rather than progress.

It is thus very exciting to think how today technology can impact definitions for group membership and the boundaries of power. Take a look at an old dichotomy between nomadic and pastoral groups. Some travel often, others stay put. Now we look around and see basic technology concepts like remote management and virtual environments forcing a rethink of who belongs to what and where they really are at any moment in time.

Perhaps you remember how Amazon wanted to provide cloud services to the US government under ITAR requirements?

Amazon Web Services’ GovCloud puts federal data behind remote lock and key

The question of maintaining “state” information was raised because ITAR protects US secrets by requiring that only citizens have access. Rather than fix their cloud’s inability to provide security at the required level, a dedicated private datacenter was created where only US citizens had keys. Physical separation. A more forward-thinking solution would have been to develop encryption and identity-management solutions that avoided breaking up the cloud while still complying with requirements.

This problem came up again in reverse when Microsoft was told by the US government to hand over data in Ireland. Had Microsoft built a private-key solution, linked to the national identity of users, they could have demonstrated an actual lack of access to that data. Instead you find Microsoft boasting to the public that state boundaries have been erased, your data moves with you wherever you go, while telling the US government that data in Ireland can’t be accessed.
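A minimal sketch of that idea, under loud assumptions: the key service and all names are hypothetical, and the XOR "cipher" is a stand-in purely for illustration, not real cryptography. The point is the custody model: the operator stores only ciphertext anywhere in the world, while the key never leaves the user's jurisdiction, so the operator can demonstrate an actual lack of access.

```python
# Toy custody model: ciphertext is globally replicable, but the decryption
# key is held by a service bound to one jurisdiction. (XOR is used only to
# keep the sketch self-contained -- it is not real cryptography.)
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

class JurisdictionKeyService:
    """Holds keys inside one jurisdiction; refuses requests from outside it."""
    def __init__(self, jurisdiction):
        self.jurisdiction = jurisdiction
        self._keys = {}

    def new_key(self, user, nbytes):
        self._keys[user] = secrets.token_bytes(nbytes)
        return self._keys[user]

    def get_key(self, user, requester_jurisdiction):
        if requester_jurisdiction != self.jurisdiction:
            raise PermissionError("key never leaves the jurisdiction")
        return self._keys[user]

ie_keys = JurisdictionKeyService("IE")
plaintext = b"mail stored in Ireland"
key = ie_keys.new_key("user@example.ie", len(plaintext))
stored = xor(plaintext, key)  # the operator keeps only this, anywhere

# An in-jurisdiction request succeeds; a request from abroad cannot.
assert xor(stored, ie_keys.get_key("user@example.ie", "IE")) == plaintext
try:
    ie_keys.get_key("user@example.ie", "US")
except PermissionError as e:
    print(e)
```

With this design the refusal is technical rather than contractual: the operator abroad holds nothing a court order could usefully compel.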

Being stateful is not just a firewall concern, it really has roots in political science.

An Ominous Test

Which scares you more: someone moving freely, or a person who digs in for the long haul and claims proof of boundary violations where you see none?

Whereas territory used to be an essential characteristic of a state, today we wonder what membership and presence means when someone can remain always connected, not to mention their ability to roam within overlapping groups. Boundaries may form around nomads who carry their farms with them (i.e. playing FarmVille) and of course pastoralism changes when it moves freely without losing control (i.e. remote management of a Data Center).

Technology is fundamentally altering the things we used to rely upon to manage power. On the one hand this is of course a good thing. Survivability is an aim of security, reducing the impact of disaster by making our data more easily spread around and preserved. On the other hand this great benefit also poses a challenge to security. Confidentiality is another aim of security, controlling the spread of data and limiting preservation to reduce exposure. If I can move 31TB/hr (a recent estimate) to protect data from being destroyed, it also becomes harder to stop giant exfiltration of data.

From an information security professional’s view the two sides tend to be played out in different types of power and groups. We rarely, if ever, see a backup expert in the same room as a web application security expert. Yet really it’s a sort of complicated balance that rests on top of trust and relationships, the sort of thing political scientists love to study.

With that in mind, notice how Listverse plays to popular fears with a top ten “Ominous State-Sponsored Hacker Group” article. See if you now, thinking about a balance of power between groups, can find flaws in their representation of security threats.

It is a great study. Here are a few questions that may help:

  • Why would someone use “ominous” to qualify “state-sponsored” unless there also exist non-ominous state-sponsored hacker groups?
  • Are there ominous hacker groups that lack state support? If so, could they out-compete state-sponsored ones? Why or why not? Could there be multiple-affiliations, such that hackers could be sponsored across states or switch states without detection?
  • What is the political relationship, the power balance, between those with a target surface that gives them power (potentially running insecure systems) and those who can more efficiently generate power to point out flaws?
  • How do our own political views affect our definitions and what we study?

I would love to keep going yet I fear this would strain too far the TL;DR intent of the post. Hopefully I have helped introduce someone, anyone (hi mom!), to the increasing need for combined practice in political science and information security. This is a massive topic and perhaps if there is interest I will build a more formal presentation with greater detail and examples.

Updated 19 January: added “The Psychology of Intelligence Analysis” citation and excerpt.

Posted in Security.


The Beginning Wasn’t Full-Disclosure

An interesting personal account of vulnerability disclosure called “In the Beginning There was Full Disclosure” makes broad statements about the past.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it.

I don’t know about you, but immediately my brain starts searching for a date. What year was this beginning?

No dates are given, only clues.

First clue, a reference to RFP.

So a guy named Rain Forest Puppy published the first Full Disclosure Policy promising to release vulnerabilities to vendors privately first but only so long as the vendors promised to fix things in a timely manner.

There may be earlier versions. The RFP document doesn’t have a date on it, but links suggest 2001. Lack of date seems a bit strange for a policy. I’ll settle on 2001 until another year pops up somewhere.

Second clue, vendors, meaning Microsoft.

But vendors didn’t like this one bit and so Microsoft developed a policy on their own and called it Coordinated Disclosure.

This must have been after the Gates memo of 2002.

Both clues say the beginning was around 2000. That seems odd because software-based updates in computers trace back to 1968.

It also is odd to say the beginning was a Microsoft policy called Coordinated Disclosure. Microsoft says they released that in 2010.

Never mind 2010. Responsible disclosure was the first policy/concept at Microsoft: right after the Gates memo on security they mention it in 2003, discussing how Tavis Ormandy decided unilaterally to release a 0day on XP.

Thus all of the signals, as I dug through the remainder of the post, suggest vulnerability research beginning around 15 years ago. To be fair, the author gives a couple earlier references:

…a debate that has been raging in security circles for over a hundred years starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990’s as security researchers started finding vulnerabilities in products.

Yet these are too short a history (1890s wasn’t the first release of locksmith secrets) and not independent (L0pht takes credit for raising the debate around them) for my tastes.

Locksmith secrets are thousands of years old. Their disclosure follows. Pin-tumblers get called Egyptian locks because that’s where they are said to have originated; technically the Egyptians likely copied them out of Mesopotamia (today Iraq). Who believes Mesopotamia was unhappy its lock vulnerabilities were known? And that’s really only the tip of the iceberg in thousands of years of disclosure history.

I hear L0pht taking credit again. Fair point. They raised a lot of awareness while many of us were locked in dungeons. They certainly marketed themselves well in the 1990s. No question there. Yet were they raising the debate or joining one already in progress?

To me the modern distributed-systems debate raged much, much earlier. The 1968 Carterfone case, for example, ignited a whole generation seeking boundaries for “any lawful device” on public communication lines.

In 1992 Wietse Venema appeared quite adamant about the value of full disclosure, as if trying to argue it needs to happen. By 1993 he and Dan Farmer had published the controversial paper “Improving the Security of Your Site by Breaking Into It”.

They announced a vulnerability scanner that would be made public. It was the first of its kind. For me this was a turning point in the industry: justifying visibility in a formal paper and forcing open discussion of risk within an environment that mostly had preferred secret fixes. The public Emergency Response and Incident Advisory concepts still meant working with vendors on disclosure, which I will get to in a minute.

As a side note, the ISS founder claims to have written an earlier version of the same vulnerability scanner. Although possible, so far I have found nothing outside his own claims to back this up. SATAN was free, has far wider recognition (i.e. a USENIX paper), and also was easily found running in the early 1990s. I remember when ISS first announced in the mid 1990s; it appeared to be a commercial version of SATAN that did not even try to distinguish or back-date itself.

But I digress. Disclosure of vulnerabilities in 1992 felt very controversial. The disclosures I found were very hushed, and the deep ethical discussions of exposing weakness were clearly captured in the Venema/Farmer paper. There definitely was still secrecy, and not yet a full-disclosure climate.

Just to confirm I am not losing my memory, I ran a few searches on an old vulnerability disclosure list, the CIAC. Sure enough, right away I noticed secretive examples. A January 4, 1990 notice for the Texas Instruments D3 Process Control System gives no details, only:

TI Vuln Disclosure

Also in January 1990, Apple has the same type of vulnerability notice.

Even more to the point, and speaking of SATAN, I also noticed HP using a pre-release notice. This confirms my memory isn’t far off; full disclosure was not the norm. HP issued a notice before the researcher made the vulnerabilities public.

HP SATAN

Vendors shifted how they respond not because a researcher released a vulnerability under the pride of full disclosure, which a vendor had powerful legal and technical tools to dispute. Rather, SATAN changed the economics of disclosure by making the discussion with a vendor about self-protection through awareness first-person and free.

Anyone could generate a new report, anywhere, anytime so the major vendors had to contemplate the value of responding to an overall “assessment” relative to other vendors.

Anyway, great thoughts on disclosure from the other blog, despite the difference on when and how our practices started. I am ancient in Internet years and perhaps more prone than most to dispute historic facts. Thus I encourage everyone to search early disclosures for further perspective on a “Beginning” and how things used to run.

Updates:

@ErrataRob points out SATAN was automating what CERT had already outed, the BUGTRAQ mailing list (started in 1993) was meant to crowd-source disclosures after CERT wasn’t doing it very well. Before CERT people traded vulns for a long time in secret. CERT made it harder, but it was BUGTRAQ that really shutdown trading because it was so easy to report.

@4Dgifts points out that discussion of vulns on the comp.unix.security USENET newsgroup started around 1984.

@4Dgifts points out a December 1994 debate where the norm clearly was not full-disclosure. The author even suggests blackhats masquerade as whitehats to get early access to exploits.

All that aside, it is not my position to send out full disclosure, much as I might like to. What I sent to CERT was properly channeled through SCO’s CERT contact. CERT is a recognized and official carrier for such materials. 8LGM is, I don’t know, some former “black hat” types who are trying pretty hard to wear what looks like a “white hat” these days, but who can tell? If CERT believes in you then I assume you’ll be receiving a copy of my paper from them; if not, well, I know you’re smart enough to figure it out anyway.

[…]

Have a little patience. Let the fixed code propagate for a while. Give administrators in far off corners of the world a chance to hear about this and put up defenses. Also, let the gory details circulate via CERT for a while — just because SCO has issued fixes does not mean there aren’t other vendors whose code is still vulnerable. If you think this leaves out the freeware community, think again. The people who maintain the various login suites and other such publically available utilities should be in contact with CERT just as commercial vendors are; they should receive this information through the same relatively secure conduits. They should have a chance to examine their code and if necessary, distribute corrected binaries and/or sources before disclosure. (I realize that distributing fixed sources is very similar to disclosure, but it’s not quite the same as posting exploitation scripts).

Posted in History, Security.


US President Calls for Federal 30-day Breach Notice

Today the US moved closer to a federal consumer data breach notification requirement (healthcare has had a federal requirement since 2009 — see Eisenhower v Riverside for why healthcare is different from consumer).

PC World says a presentation to the Federal Trade Commission sets the stage for a Personal Data Notification & Protection Act (PDNPA).

U.S. President Barack Obama is expected to call Monday for new federal legislation requiring hacked private companies to report quickly the compromise of consumer data.

Every state in America has had a different approach to breach deadlines, typically led by California (starting in 2003 with SB1386 consumer breach notification), and more recently led by healthcare. This seems like an approach that has given the Feds time to reflect on what is working before they propose a single standard.

In 2008 California moved to a more aggressive 5-day notification requirement for healthcare breaches after a crackdown on UCLA executive management missteps in the infamous Farah Fawcett breaches (under Gov Schwarzenegger).

California this month (AB1755, effective January 2015, approved by the Governor September 2014) relaxed its healthcare breach rules from 5 to 15 days after reviewing 5 years of pushback on interpretations and fines.

For example, in April 2010, the CDPH issued a notice assessing the maximum $250,000 penalty against a hospital for failure to timely report a breach incident involving the theft of a laptop on January 11, 2010. The hospital had reported the incident to the CDPH on February 19, 2010, and notified affected patients on February 26, 2010. According to the CDPH, the hospital had “confirmed” the breach on February 1, 2010, when it completed its forensic analysis of the information on the laptop, and was therefore required to report the incident to affected patients and the CDPH no later than February 8, 2010—five (5) business days after “detecting” the breach. Thus, by reporting the incident on February 19, 2010, the hospital had failed to report the incident for eleven (11) days following the five (5) business day deadline. However, the hospital disputed the $250,000 penalty and later executed a settlement agreement with the CDPH under which it agreed to pay a total of $1,100 for failure to timely report the incident to the CDPH and affected patients. Although neither the CDPH nor the hospital commented on the settlement agreement, the CDPH reportedly acknowledged that the original $250,000 penalty was an error discovered during the appeal process, and that the correct calculation of the penalty amount should have been $100 per day multiplied by the number of days the hospital failed to report the incident to the CDPH for a total of $1,100.
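The penalty arithmetic in that case is easy to check. Here is a minimal sketch using only the dates and the $100-per-day rate cited in the quote above; this is plain calendar math for illustration, not a model of the CDPH's business-day rules:

```python
from datetime import date

# Dates from the CDPH case quoted above (all in 2010).
reporting_deadline = date(2010, 2, 8)   # 5 business days after the Feb 1 "confirmation"
actually_reported = date(2010, 2, 19)   # when the hospital reported to the CDPH

days_late = (actually_reported - reporting_deadline).days
penalty = 100 * days_late               # corrected rate: $100 per day late

print(days_late, penalty)  # 11 1100
```

Eleven days late at $100 per day reproduces the $1,100 settlement figure, versus the $250,000 originally (and erroneously) assessed.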

It is obvious that too long a timeline hurts consumers. Too short a timeline, however, has been shown to force mistakes: covered entities rush to conclusions, then sink time into contesting unjust fines and repairing their reputations.

Another risk with too-short timelines (and a complaint you will hear from investigation firms) is that early notification undermines quiet investigations (e.g. criminals will erase their tracks). This is a valid criticism; however, it does not clearly outweigh the benefits to victims of early notification.

First, a law-enforcement delay caveat is meant to address this concern. AB1755 allows a report to be submitted 15 days after the end of a law-enforcement imposed delay period, similar to caveats found in prior requirements to assist important investigations.

Second, we have not seen huge improvements in attribution/accuracy after extended investigation time, mostly because politics start to settle in. I am reminded of when Walmart in 2009 admitted to a 2005 breach. Apparently they used the time to prove they did not have to report credit card theft.

Third, consider value relative to the objective of protecting data from breach. Take the 30-day Mandiant 2012 report for the South Carolina Department of Revenue: it ultimately was unable to determine who attacked (although they still hinted at China), and it is doubtful any more time would have resolved that question. The AP has reported Mandiant charged $500K or higher, and it also is doubtful many will find such high costs justified. Compare their investigation rate with the cost of improving victim protection:

Last month, officials said the Department of Revenue completed installing the new multi-password system, which cost about $12,000, and began the process of encrypting all sensitive data, a process that could take 90 days.

I submit to you that a reasonably short and focused investigation time saves money and protects consumers early. Delay for private investigation brings little benefit to those impacted. Fundamentally, who attacked tends to be less important than how a breach happened, and determining how takes a lot less time to investigate. As an investigator I always want to get to the who, yet I recognize this is not in the best interest of those suffering. So we see diminishing value in waiting and increasing value in notification. Best to apply pressure quickly; 30 days seems reasonable enough to allow investigations to reach conclusive and beneficial results.

Internationally, Singapore has the shortest deadline I know of, at just 48 hours. If anyone thinks keeping track of all the US state requirements has been confusing, working globally gets really interesting.
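For a sense of the spread, the deadlines mentioned in this post can be put side by side. A small illustrative sketch; the figures are only those cited here, statutes change, and most carry caveats (such as the law-enforcement delay period under AB1755):

```python
# Breach notification deadlines cited in this post, in hours.
# Illustrative only: real statutes carry exceptions and caveats.
deadlines_hours = {
    "US federal consumer (proposed PDNPA)": 30 * 24,
    "California healthcare (AB1755, 2015)": 15 * 24,
    "Singapore": 48,
}

shortest = min(deadlines_hours, key=deadlines_hours.get)
print(shortest, deadlines_hours[shortest])  # Singapore 48
```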

Update, Jan 13:

Brian Krebs blogs his concerns about the announcement:

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose publicly the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

First, federal preemption of state laws sounds worse than it probably is. Covered entities of course want more local control at first, to weigh in heavily on politicians and set the rules. Yet look at how AB1755 unfolded in California: the medical lobby tried to get the notification window moved from 5 days to 60 and ended up at 15. A federal 30-day rule, even where preemptive, isn't completely out of the blue.

Second, disclosure of “how” a breach happened is a separate issue. The payment industry is the most advanced in this area of regulation; its council privately releases detailed methods in bulletins. The FBI also has private methods to notify entities of what to change. Even so, generic bulletins are often sufficient to be actionable. That is why I mentioned the South Carolina report earlier; here you can see useful details made public while remaining actionable:

Mandiant Breach Report on SCDR

Obama is also expected today to make a case in front of the NCCIC for better collaboration between the private and government sectors (Press Release). That will be the forum for this separate issue. It reminds me of the 1980s debate about control of the Internet, led by Rep Glickman and decided by President Reagan. The outcome was a new NIST and the awful CFAA. Let’s see if we can do better this time.

Letters From the White House:

Posted in History, Security.


The (Secret) History of the Banana Split

If there is a quintessential American dessert it is the banana split. Why? Although we can credit Persians and Arabs with the invention of ice-cream (nice try, China), the idea of putting lots of ice-cream on a split banana covered in everything you can find but the kitchen sink… surely that is pure American innovation.

After reading many food history pages and mulling their facts a bit I realized something important was out of place. There had to be more to this story than just Americans love big things — all the fixings — and one day someone put everything together. Why America? When?

I found myself digging around for more details and eventually ended up with this official explanation.

In 1904 in Latrobe, the first documented Banana Split was created by apprentice pharmacist David Strickler — sold here at the former Tassell Pharmacy. Bananas became widely available to Americans in the late 1800s. Strickler capitalized on this by cutting them lengthwise and serving them with ice cream. He is also credited with designing a boat-shaped glass dish for his treat. Served worldwide, the banana split has become a prevalent American dessert.

The phrase that catches my eye, almost lost among the other boring details, is that someone with an ingredient “widely available…capitalized”; capitalism appears to be the key to unlock this history.

Immigration and Trade

The first attribution goes to Italian immigrants who brought spumoni to America around the 1870s. This three-flavor ice-cream often came in the three colors of their home country’s flag. No problem for Americans: the idea of a three-flavor treat was taken and adapted to local favorites, chocolate, strawberry and vanilla. Ice-cream became more widely available by the 1880s, and experimentation was inevitable as competition boomed. It obviously was a very popular food by the 1904 St. Louis World’s Fair, which infamously popularized eating it with cones.

In parallel, new trade developments emerged. Before the 1880s there were few bananas found in America. America bought around $250K of bananas in 1871. Only thirty years later the imports had jumped 2,460% to $6.4m and were in danger of becoming too common. Bananas being both easily sourced and yet still exotic made them ideal for experiments with ice-cream. The dramatic change in trade and availability was the result of the corporate conglomerate formed in 1899 called United Fruit Company. I’ll explain more about them in a bit.

So what we’re talking about, at this point, is Persian/Arab ice-cream modified and brought by Italian immigrants to America, further modified in America and then dropped on a newly marketed corporate banana. All the fixings on top of a banana-split make perfect sense if you put yourself in the shoes of someone working in the soda/pharmacy business of 1904.

Bananas and Pineapples Were The New Thing

Imagine you’re in a drug-store and supposed to be offering something amazing or exotic to draw in customers. People could go to any drugstore. You pull out the hot new banana fruit, add the three most-popular flavors (impressive yet not completely unfamiliar) and then dump all the sauces you’ve got on top. You charge double the cost of any other dessert. Should you even add pineapple on top? Of course! You just started getting pineapple in a new promotion by the Dole corporation:

In 1899 James Dole arrived in Hawaii with $1000 in his pocket, a Harvard degree in business and horticulture and a love of farming. He began by growing pineapples. After harvesting the world’s sweetest, juiciest pineapples, he started shipping them back to mainland USA.

I have mentioned before on this blog how the US annexed Hawaii by sending in the Marines. Interesting timing, no? Food historians rarely bother to talk about this side of the equation, so indulge me for a moment. I sense a need for this story to be told.

The arrival of Mr. Dole to Hawaii in 1899, and a resulting sudden widespread availability of pineapples in drugstores for banana splits, is a dark chapter in American politics.

1890 American Protectionism and Hawaiian Independence

The US Republican Congress in 1890 had approved the McKinley Tariff, which raised the cost of imports 40-50%. Although it left an exception for sugar, it still removed Hawaii’s “favored status” and rewarded domestic production.

Only two years after the Tariff the sugar exports of Hawaii to America dropped 40% and threw their economy into shock. Sugar plantations run by white American businessmen quickly cooked up ideas to reinstate profits; removing Hawaii’s independence was their favored plan.

At the same time these businessmen conspired to remove Hawaiian independence, Queen Lili`uokalani took Hawaii’s throne and indicated she would reduce foreign interference, drafting a new constitution.

The two sides were headed for disaster in 1892 despite the US government shifting dramatically to Democratic control (leading to the repeal of the McKinley Tariff in 1894). As Hawaii hinted at more national control, the foreign businessmen in Hawaii increasingly called on America for annexation.

An “uprising” in early 1893 forced the Queen to abdicate power to a government supported by the sugar growers. US Marines stormed the island under the premise of protecting American businessmen and this new government.

The nation’s fate seemed sealed, however it actually remained uncertain as a newly elected US President was openly opposed to imperialism and annexation. He even spoke of support for the Queen. Congressional pressure mounted and by 1897 the President seemed less likely to oppose annexation. Finally in 1898, given the war with Spain, Hawaii became of strategic importance and abruptly lost its independence definitively.

Few Americans I speak with realize the US basically sent Marines to annex Hawaii for the sake of increased profits for American plantation owners and cheaper sugar for American consumers, then sealed the annexation with war.

Total Control Over Fruit Sources

Anyway, remember Mr. Dole arriving in Hawaii in 1899 ready to start shipments of cheap pineapples? His arrival and success was a function of the annexation of an independent state; the creation of a pro-American puppet government to facilitate business and military interests. This is why drugstores in 1904 suddenly had ready access to pineapple.

And back to the question of bananas, the story is quite similar. The United Fruit Company quickly was able to establish US control over plantations in Colombia, Costa Rica, Cuba, Jamaica, Nicaragua, Panama, Santo Domingo and Guatemala.

Nearly half of Guatemala fell under control of the US conglomerate corporation, apparently, and yet no taxes had to be paid; telephone communications as well as railways, ports and ships all were owned by United Fruit Company. The massive level of US control initially was portrayed as an investment and benefit to locals, although hindsight has revealed another explanation.

“As for repressive regimes, they were United Fruit’s best friends, with coups d’état among its specialties,” Chapman writes. “United Fruit had possibly launched more exercises in ‘regime change’ on the banana’s behalf than had even been carried out in the name of oil.” […] “Guatemala was chosen as the site for the company’s earliest development activities,” a former United Fruit executive once explained, “because at the time we entered Central America, Guatemala’s government was the region’s weakest, most corrupt and most pliable.”

Thus the term “banana republic” was born.

And while this phrase was meant to be pejorative, it gladly was adopted in the 1980s by a couple of Americans who traveled the world to blatantly steal clothing designs and resell them as a “discovery”. That success at appropriating ideas led to the big-brand stores selling inexpensive clothes found in most malls today. The irony of the name surely has been lost on everyone.

So too with the dessert. In other words, the banana split is a by-product, a modern representation, of America’s imperialist expansion and brutal corporate-led subjugation of freedoms in foreign nations during the early 1900s.

Now you know the secret behind widespread availability of inexpensive ingredients that made a famous and iconic dessert possible.

Posted in Food, History, Security.


Linguistics as a Tool for Cyber Attack Attribution

From 2006 to 2010 my mother and I presented a linguistic analysis of Advance Fee Fraud (the 419 scam).

One of the key findings we revealed (also explained in other blog posts and our 2006 paper) is that intelligence does not prevent someone from being vulnerable to simple linguistic attacks. In other words, highly successful and intelligent analysts have a predictable blind-spot that leads them to mistakes in attribution.

The title of the talk was usually “There’s No Patch for Social Engineering” because I focused on helping users avoid being lured into phishing scams and fraud. We received very little press attention, and in retrospect, instead of raising awareness through talks and papers alone (the peer-review model), we perhaps should have open-sourced a linguistic engine for detecting fraud. I suppose it depends on how we measure impact.
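To make that idea concrete, here is a minimal, hypothetical sketch of what the front end of such an engine might look like: a weighted list of register and phrasing markers common in 419 letters. The specific patterns and weights below are my illustrative assumptions, not the feature set from our research:

```python
import re

# Hypothetical markers of 419-scam register; patterns and weights are
# illustrative assumptions, not the features from the 2006-2010 research.
FRAUD_MARKERS = {
    r"\bdear (friend|beloved)\b": 2,
    r"\bnext of kin\b": 3,
    r"\bmodalities\b": 2,            # formal register mismatch common in 419 letters
    r"\b(urgent|confidential)\b": 1,
    r"\bmillion (dollars|usd)\b": 2,
}

def fraud_score(text: str) -> int:
    """Sum the weights of all marker patterns found in the text."""
    lower = text.lower()
    return sum(w for pattern, w in FRAUD_MARKERS.items() if re.search(pattern, lower))

msg = "Dear Friend, I contact you in urgent and confidential modalities."
print(fraud_score(msg))  # 5
```

A real engine would of course go beyond keyword weights into the syntax, spelling and discourse features discussed in the presentation, but even this toy version shows how the approach could have been packaged for reuse.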

Despite the lack of journalist interest, we received a lot of positive feedback from attendees: investigators, researchers and analysts. That felt like success. After presenting at the High Technology Crime Investigation Association (HTCIA), for example, I had several ex-law-enforcement and intelligence officers thank me profusely for explaining, in detail and with data, how intelligence can actually make someone more prone to misattribution and to falling victim to bias-laced attacks. They suggested we go inside agencies to train staff behind closed doors.

Recently the significance of the work has taken a new turn: I see a spike in interest in my blog post from 2012, coupled with news that linguistics is being used to analyze Sony attack attribution. Ironically the story is by a “journalist” at the NYT who blocked me on Twitter. I’m told by friends I was blocked because I once used a Modified Tweet (MT) to parody her headline.

Since long before the beginning of the Sony attack I have tried to raise the importance of linguistic analysis for attribution, as I tweeted here.

NSA, @Mandiant and @FireEye analysts say no English or bad grammar means u not no American

And then at the start of the Sony news on December 8, I tweeted a slide from our 2010 presentation. Also recently I tweeted

good analysis causes anti-herding behavior: “separates social biases introduced by prior ratings from true value”

Tweets unfortunately are disjointed and reach a far smaller audience than my blog posts, so perhaps it is time to return to this topic here instead. Thus I am posting the full presentation again:

RSAC_SF_2010_HT1-106_Ottenheimer.pdf

I look forward to discussing this topic further, as it definitely needs more attention in the information security community. Kudos to Jeffrey Carr for pursuing the topic and for the invitation to participate.

Updated to add: Perhaps it also would be appropriate here to mention my mother’s book, The Anthropology of Language: An Introduction to Linguistic Anthropology.

Ottenheimer’s authoritative yet approachable introduction to the field’s methodology, skills, techniques, tools, and applications emphasizes the kinds of questions that anthropologists ask about language and the kinds of questions that intrigue students. The text brings together the key areas of linguistic anthropology, addressing issues of power, race, gender, and class throughout. Further stressing the everyday relevance of the text material, Ottenheimer includes “In the Field” vignettes that draw you in to the chapter material via stories culled from her own and others’ experiences, as well as “Doing Linguistic Anthropology” and “Cross-Language Miscommunication” features that describe real-life applications of text concepts.

Posted in Security.


How the NSA Can Tell if You Are a Foreigner

For several years I have tried to speak openly about why I find it disappointing that analysts rely heavily (sometimes exclusively) on language to determine who is a foreigner.

Back in 2011 I criticized McAfee for their rather awful analysis of language.

They are making some funny and highly improbable assumptions: … The attackers used Chinese language attack tools, therefore they must be Chinese. This is a reverse language bias that brings back memories of L0phtCrack. It only ran in English.

Here’s the sort of information I have presented most recently for people to consider:

You can see above that analysts told a reporter the presence of a Chinese language pack was the clue to Chinese design and operation of attacks on Russia. Then further investigation revealed the source actually was Korea. Major error, no? Yet it seems to have been reported as only an “oops” instead of a WTF.

At a recent digital forensics and incident response (DFIR) meeting I pointed out that the switch from Chinese to Korean origin of attacks on Russia of course was a huge shift in attribution, one with potential connections to the US.

This did not sit well with at least one researcher in the audience. “What proof do you have there are any connections from Korea to the US?” they yelled out. I assumed they were facetiously trying to see if I had evidence of an English language pack to prove my point.

In retrospect they may actually have been seriously asking me to offer clues why Korean systems attacking Russia might be linked to America. I regret not taking the time to explain what clues more significant than a language pack tend to look like. Cue old history lesson slides…but I digress.

Here’s another slide from the same talk I gave about attribution and language. I point to census data with the number and location of Chinese speakers in America, and most popular languages used on the Internet.

Unlike McAfee, mentioned above, FireEye and Mandiant have continued to ignore the obvious and point to Chinese language as proof of someone being foreign.

Consider for a moment that the infamous APT1 report suggests that language proves nothing at all. Here is page 5:

Unit 61398 requires its personnel to be…proficient in the English language

Thus proving APT1 are English-speaking and therefore not foreigners? No, wait, I mean proving that APT1 are very dangerous because you can never trust anyone required to be proficient in English.

But seriously, Mandiant sets this out presumably to establish two things.

First, “requires to be proficient” is a subtle way to say Chinese never will do better than “proficient” (non-native) because, foreigners.

Second, the Chinese target English-speaking victims (“Only two victims appear to operate using a language other than English…we believe that the two non-English speaking victims are anomalies”). Why else would the Chinese learn English except to be extremely targeted in their attacks — narrowing their focus to basically everywhere people speak English. Extremely targeted.

And then on page 6 of APT1 we see supposed proof from Mandiant of something else very important. Use of a Chinese keyboard layout:

…the APT1 operator’s keyboard layout setting was “Chinese (Simplified) – US Keyboard”

On page 41 (suspense!) they explain why this matters so much:

…Simplified Chinese keyboard layout settings on APT1’s attack systems, betrays the true location and language of the operators

Mandiant gets so confident in where someone is from based on assessing language they even try to convince the reader that Americans do not make grammar errors. Errors in English (failed attempts at proficiency) prove they are dealing with a foreigner.

Their own digital weapons betray the fact that they were programmed by people whose first language is not English. Here are some examples of grammatically incorrect phrases that have made it into APT1’s tools

It is hard to believe this is not meant as a joke. There is a complete lack of linguistic analysis here, just a strange assertion about proficiency. In our 2010 RSAC presentation on the linguistics of threats we analyze phrases and show how syntax and spelling can be useful in understanding origins. I can only imagine what people would have said if we had tried to argue “Bad Grammar Means English Ain’t Your First Language”.

Of course I am not saying Mandiant or others are wrong to suspect Chinese connections when they find some Chinese language. Despite analysts wearing clothes with Chinese language tags and using computers that probably have Chinese-language print on them, there may be some actual connections worth investigating further.

My point is that the analysis offered to support conclusions has been incredibly weak, almost to the point of being a huge distraction from the quality in the rest of the reports. It makes serious work look absurd when someone over-emphasizes language spoken as proof of geographic location.

Now, in some strange twist of “I told you so”, the Twittersphere has come alive with condemnation of an NSA analyst for relying too heavily on language.

Thank you to Chris and Halvar and everyone else for pointing out how awful it is when the NSA does this kind of thinking; please also notice how often it happens elsewhere.

More people in the security community need to speak out against this on a regular basis. It really is far too common in far too many threat reports to be treated as unique or surprising when the NSA does it, no?

Posted in History, Security.


In Defense of Microsoft’s Active Defense Against No-IP

The Microsoft take-down of malicious DNS has stirred a healthy debate. This is the sort of active defense dilemma we have been presenting on for years, trying to gather people to discuss it. Now it seems to be of broad interest, thanks to a court order authorizing a defense attempt against malware: take-over and scrubbing of name resolution.

Over the past several days I have been in lengthy discussions with numerous lawyers on mailing lists about legal and technical details to the complaint and action. Some have asked me to put my thoughts into a blog, so here you have it.

This dialogue with both lawyers and security experts has crystallized for me that a community trying to increase freedom on the Internet should be, and some already are, supportive of elements in Microsoft’s action.

There is an opportunity here to guide courts to course-correct and to increase the effectiveness of individuals or even groups using active defense to reduce harm with minimal impact on freedoms. One exception in the security community stands out: some said the organization implicated in the harm was sufficiently responsive before Microsoft’s action and should have been left alone to continue dispensing at current rates. Hold that thought.

Throughout my entire career, just to put this in some perspective, I have been an outspoken critic of Microsoft. My site name, flyingpenguin, started in the mid-1990s as homage to Linux and in belief that it would ultimately bypass Microsoft. This was in part due to coming from a VMS and Unix background and then being asked in my first professional job to lock-down and defend Windows NT 3.51 from compromise. It was hairy bad.

Anyone remember Bill Gates saying NT would ship but security can wait? Or remember Microsoft’s founder telling the UNIX community they have to explain to him how to make a billion dollars with security? My 2011 Dr. Stuxlove presentation started with some of those stories.

Ok, a full confession: I was offered PCs with Microsoft Word at home but I preferred WordPerfect and switched to Apple as soon as I could (1990, although I stopped using Apple in 2010). Despite preferences, I also accepted my fate as a security professional, which has meant 20 years spent working on ways to protect Microsoft customers.

To me, for as long as I can remember, Microsoft really seemed like a law firm started with lawyerish intentions; it just happened to also write and sell software. I might have further hardened these views due to years I spent watching legal trickery used like cannons to sink all the competing software boats; obvious hostility and attempts to knock holes into hobbyist and free software movements.

That legally-led-and-defended campaign against competition didn’t last forever, for various reasons outside the scope of this post. Microsoft gradually was forced by external factors to realign its definition of malice away from competitors and hobbyists and towards clearly malicious software, as well as some glaring flaws in its accountability department. The change started around 2000. By 2005 I was invited inside for a meeting where I was told “we now have five people full-time on security”. Five, in the entire company; I don’t know if that was accurate, but apparently 1/5 of the Microsoft security group saw me almost fall out of my chair.

Today, despite the thick jade-colored glasses you might think I wear when looking at Microsoft, I can see a different company taking very different approaches to security. Microsoft is *cough*, I can’t believe I have to say this, emerging as a leader and committed to improving safety in some balanced and thoughtful ways.

I was surprised to be invited to another internal meeting in 2013 but was even more surprised to see how thoroughly a security message is working its way through the organization. Don’t count me a full supporter yet, however. I’m still a skeptic, but I have to admit some noticeable changes happening that I wanted to see. Either they’re really getting it or my bullshit detector is failing. Of course both are possible but I believe it is the former.

Microsoft in the past few months appears to have rotated their massive legal cannons to fire volleys of legal briefs upon those they find willingly causing catastrophic harm to Microsoft-made vessels. Am I using the “letter of marque” analogy too liberally here? Microsoft is asking the legal authority for permission to fire, opening their plans for assessment by that authority, and claiming they will act responsibly within limits defined by the authority. We might actually want this to happen more. After all, if Microsoft does not try to actively help in the defense of their users from harm, who should we turn to and ask for a better job with less risk?

Let me try another analogy. This one might resonate closer to home (pun not intended). Microsoft builds houses and people move in thinking they will be safe. Nearly 24 million people residing in these homes are soon reported sick or dead, causing huge costs and outages. Several independent reports publicly confirm that a service provider is involved in the harm, and this provider has taken little or no significant action to block its distribution despite overwhelming evidence: confirmed impact to at least 8 million people. Not only does the service provider show no response to public reports of harm, the harm continues to rise.

Microsoft, (now) showing concern about the safety of its homes, tells the court that numerous independent investigations show over 90% harm comes from one service provider. Microsoft asks the court for authority to act on this because, well, logic. They suggest they are in the best position to lead a takeover to continue services without interruption while filtering out harm to tens of millions of people that the court wants to protect. The courts grant this limited authority for the purpose of efficiently cleaning harm.

Unfortunately, this proposal fails. Microsoft’s service has been oversold (surprise) and unable to perform at a level anticipated. Moreover, it turns out to be difficult to prove whether only those causing harm are inconvenienced or also others using the service.

Critics argue as many as 4 million might be inconvenienced (without qualifying whether they were malware-related or not); but those critics do not measure benefits, or put that number in perspective against the potentially 24 million harmed over the past year. Critics also argue insufficient notice was given to the service provider before Microsoft moved services in order to clean them. Remember how I told you to keep in mind that some people said the provider was very responsive to reports of malware? I believe this responsiveness argument backfires on critics of Microsoft. Here’s why:

24 million (worst case) or even 8 million (best case) victims in a year, reported by multiple sources, makes it hard to argue the provider was “responsive” to the issue at hand. They may have been responsive to some particular request, but what did they do about the 24 million problem?

Technically, people are right that formal notice is required. Many in the security community point out, however, that the provider was a known source of harm being *regularly* notified, which tends to contradict those in the community who say the responsiveness felt adequate for the narrow band of their own requests. The context often missing from critics of Microsoft is whether reasonable action had been taken in response to public notice about a problem in the millions.

A basic review of those who claim the responsiveness was sufficient suggests that the business of remediation, and profit from insufficient responses to malware, may color their judgment. We can probably balance the question of responsiveness by asking those assessing damage at the full scale of harm whether the response was adequate. Perhaps the court was considering notification from that angle.

The takeover clearly brought to light some mistakes. I remain skeptical about the action taken, as I said, but I credit Microsoft for doing what appears to be the right thing. Microsoft obviously needs to be held accountable, just as we would want the DNS service provider to be held more accountable for harm. In fact, it will be interesting to see how harm from the takeover will be demonstrated or documented, as that could actually help Microsoft make their next complaint.

Lessons from this event will help inform improvements to future active defense and set standards of care or definitions of reasonableness. It really kind of annoys me that Microsoft was unable to prove their DNS-scrubbing solution successful. Had they done better engineering, or had some proof of service levels, we would be having a completely different discussion right now.
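For readers curious what “DNS scrubbing” means in practice, the idea is a resolver-side filter: answer queries for known-bad hostnames with a sinkhole address and pass everything else through to a normal resolver. Here is a minimal sketch of the general technique only; the blocklist entries, hostnames, and sinkhole address are hypothetical illustrations, not Microsoft’s actual (and never fully documented) implementation.

```python
# Minimal sketch of resolver-side "DNS scrubbing": answer queries for
# known-bad hostnames with a sinkhole address, pass everything else through.

SINKHOLE_IP = "0.0.0.0"  # hypothetical address returned for blocked lookups

# hypothetical blocklist of malicious subdomains on a dynamic-DNS service
BLOCKLIST = {
    "malware-c2.example-dyndns.com",
    "phish-login.example-dyndns.com",
}

def scrub(hostname, upstream_resolve):
    """Return a sinkhole answer for blocklisted names, else resolve upstream."""
    normalized = hostname.lower().rstrip(".")
    if normalized in BLOCKLIST:
        return SINKHOLE_IP
    return upstream_resolve(hostname)

# stand-in for a real upstream resolver (a real one would query the network)
def fake_upstream(hostname):
    return "93.184.216.34"

print(scrub("malware-c2.example-dyndns.com", fake_upstream))    # sinkholed
print(scrub("innocent-user.example-dyndns.com", fake_upstream)) # passed through
```

The engineering difficulty described above lives entirely in the pass-through branch: the filter is trivial, but serving millions of legitimate queries at the anticipated service level is not.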

Instead I hear people saying Microsoft was a vigilante (acting without proper authority). That is incorrect. Microsoft asked and was granted authority. Those saying only the government can be an enforcement agent either do not understand public-private relationships or have not thought about the technical challenges (let alone social) of asking the US government to run safe DNS services. Talk about a scary proposition.

Those saying companies are getting a green light to take over others are also incorrect. Microsoft put together a detailed and compelling complaint with a systemic fix recommendation to reduce a massive amount of harm, linked to multiple current independent sources of research and verification. A green light is very different from the complicated hurdles overcome by Microsoft’s legal team. As in history, their legal prowess unfortunately outdid their engineering.

What this really boils down to is some interesting ethics questions. People are asking for a more trusted Internet, but how do we get there unless someone closest to the harm takes responsibility and proposes solutions within a legal framework (oversight)? Solutions to these types of “wicked problems” require forward thinking in partnerships, as several of us from different industries explained in a recent panel presentation.

So let’s talk about whether Microsoft should be allowed to claim safety of their consumers and users fits within a definition of self-defense. I’m obviously side-stepping the part where Microsoft said they were suffering reputation harm from malware. You can probably tell how I might respond to that claim.

What I really want the community to decide is whether Microsoft can be authorized to perform actions of “self-defense”. They are not policing the Internet. They seem to be asking for the right to block harm to their users in the most efficient, least intrusive way. Or, if we don’t accept a self-defense argument, perhaps we should instead ask: can Microsoft be authorized to defend those of its consumers and users who request protection?

It has been very interesting to hear what people think. I really have been doing my best to engage the legal community these past few days and measure as broad a reaction as possible. I am writing this more publicly in the hope of cutting through some of the noise about what the security community thinks, and to point out that even I feel Microsoft is not being fairly credited for reasonable efforts to find cures to some of the problems they helped create.

Posted in Security.


2014 Things Expo: New Security Models for the Internet of Things

Thank you to my interactive audience at the 2014 Things Expo in NYC. Really appreciate everyone attending my “New Security Models for the Internet of Things” session to close out the conference. Excellent feedback and I am pleased to see such interest in security!

Please find a PDF version here.

Posted in Security.


Cyber-Colonialism and Beliefs About Global Development

Full disclosure: I spent my undergraduate and graduate years researching the ethics of intervention, with a focus on the Horn of Africa. One of the most difficult questions to answer was how to define colonialism. Take Ethiopia, for example. It was never colonized, and yet the British invaded, occupied and controlled it from 1940-1943 (the topic of my MSc thesis at LSE).

I’m not saying I am an expert on colonialism. I’m saying that after many years of research, including a year spent reading original papers from the 1940s British Colonial Office and meeting with ex-colonial officers, I have a really good sense of how hard it is to become an expert on colonialism.

Since then, every so often, I hear someone in the tech community coming up with a theory about colonialism. I do my best to dissuade them from going down that path. Here came another opportunity on Twitter from Zooko:

This short post instantly changed my beliefs about global development. “The Dawn of Cyber-Colonialism” by @GDanezis

If nothing else, I would like to encourage Zooko and the author of “dawn of Cyber-Colonialism” to back away from simplistic invocations of colonialism and choose a different discourse to make their point.

Maybe I should start by pointing out an irony often found in the anti-colonial argument. The usual worry about “are we headed towards colonialism” is tied to some rather unrealistic assumptions. It is like a thinly-veiled way for someone to think out loud: “our technology is so superior to these poor savage countries, and they have no hope without us, we must be careful to not colonize them with it”.

A lack of self-awareness in commercial views is nothing new. John Stuart Mill, for example, opined in the 1860s that only through commercial influence would any individual realize true freedom and self-governance; yet he feared colonialists could spoil everything by failing to restrain themselves or to develop beyond their own self-interests. His worry was specifically that colonizers did not understand local needs, did not have sympathy, did not remain impartial in questions of justice, and would always think of their own profits before development. (Considerations on Representative Government)

I will leave the irony of the colonialists’ colonialism lament at this point, rather than digging into what motivates someone’s concern about those “less-developed” people and how the “most-fortunate” will define best interests of the “less-fortunate”.

People tend to get offended when you point out they may be the ones with colonialist bias and tendencies, rather than those they aim to criticize for being engaged in an unsavory form of commerce. So rather than delve into the odd assumptions taken among those who worry, instead I will explore the framework and term of “colonialism” itself.

Everyone today hates, or should hate, the core concepts of colonialism, because the concept has been boiled down to little more than an evil relic of history.

A tempting technique in discourse is to create a negative association. Want people to dislike something? Just broadly call it something they already should dislike, such as colonialism. Yuck. Cyber-colonialism, future yuck.

However, using an association with colonialism is actually not as easy as one might think. It is quite hard to get anyone to agree on even a simplified definition of colonialism. The subjugation of one group by another through integrated domination might be a good way to start the definition. And just look at all the big words in that sentence.

More than occupation, more than unfair control or any deals gone wrong, colonialism is tricky to pin down because of elements of what is known as “colonus” and the measuring of success as agrarian rather than nomadic.

Perhaps a reverse view helps clarify. The exit-barrier to colonialism is not just a change in political and economic controls. Successful colonies are characterized by active infiltration by people who settle in and, through persistent integration, displace anyone they find and deprive them of control in order to “gain” from acquired assets. It is an act of displacement coupled with active and forced reprogramming, an early exploration of corporations for profit.

Removing something colonus, therefore, is unlike removing elements performing routine work along commercial lines. Even if you fire bad processors/workers colonialism would remain. Instead removal means to untangle and reverse steps that were meant to output a new commercially-driven “civilization”. De-occupation is comparatively easy. Removing control, cancelling a deal or a contract, is also easy. De-colonization is hard.

If I must put it in IT terms, we are talking about hardware that actively tries to take control of my daily life, integrates into my existing processes, and reduces my control over direction. It is not just a bad chip; it is an entire business-process attack. It would be like someone infecting our storage devices with bitcoin-mining code that they not only profit from but also use to settle permanently in our environment and prevent us from having a final say about our own destiny. Reformulating business processes is messy, far worse than a bug in any chip.

My study in undergraduate and graduate school really tried to make sense of the end of colonialism and the role of foreign influence in national liberation movements through the 1960s. This was not a study of a patching mechanism or a new source of materials. I never found, not even in the extensive work of European philosophers, a simple way to describe the very many facets of danger from always uninvited (or even sometimes invited) guests who were able to invade and completely run large complex organizations.

Perhaps now you can see the trouble with colonialism definitions.

Now take a look at this odd paraphrase of the Oxford Dictionary (presumably because the author is from the UK), used to set up the blog post called “The dawn of Cyber-Colonialism”:

The policy or practice of acquiring full or partial political control over another country’s cyber-space, occupying it with technologies or components serving foreign interests, and exploiting it economically.

Pardon my French but this is complete bullshit. Such a definition at face value is far too broad to be useful. Partial control over another country by occupying it with stuff to serve foreign interest and exploiting it sounds like what most would call imperialism at worst, commerce at best. I mean nothing in that definition says “another country” is harmed. Harm seems essential. Subjugation is harmful. That definition also doesn’t say anything about being opposed to control or occupation, let alone exploitation.

I’m not going to blow apart the definition bit-by-bit as much as I am tempted. It fails across multiple levels and I would love to destroy each.

Instead I will just point out that such a horrible definition would mean Ethiopia must say it was colonized because of the British 1940 intervention to remove Axis invaders and restore Haile Selassie to power. Simple test; that definition fails.

Let me cut right to the chase. As I mentioned at the start, those arguing that we are entering an era of cyber-colonialism should think carefully whether they really want to wade into the mess of defining colonialism. I advise everyone to steer clear and choose other pejorative and scary language to make a point.

Actually, I encourage them to tell us how and why technology commerce is bad in precise technical details. It seems lazy for people to build false connections and use association games to create negative feeling and resentment instead of being detailed and transparent in their research and writing.

On that note, I also want to comment on some of the technical points found in the blog claiming to see a dawn of colonialism:

What is truly at stake is whether a small number of technologically-advanced countries, including the US and the UK, but also others with a domestic technology industry, should be in a position to absolutely dominate the “cyber-space” of smaller nations.

I agree in general there is a concern with dominance, but this representation is far too simplistic. It assumes the playing field is made up of countries (presumably UK is mentioned because the blog author is from the UK), rather than what really is a mix of many associations, groups and power brokers. Google, for example, was famous in 2011 for boasting it had no need for any government to exist anymore. This widely discussed power hubris directly contradicts any thesis that subjugation or domination come purely from the state apparatus.

Consider a small number of technologically-advanced companies. Google and Amazon are in a position to absolutely dominate the cyber-space of smaller nations. This would seem as legitimate a concern as past imperialist actions. We could see the term “Banana Republic” replaced as countries become a “Search Republic”.

It’s a relationship fairly easy to contemplate because we already see evidence of it. Google’s chairman told the press he was proud of “Search Republic” policies and completely self-interested commerce (the kind Mill warned about in 1861): he said “It’s called capitalism”.

Given the mounting evidence of commercial and political threat to nations from Google, what does cyber-colonialism really look like in the near, or even far-off, future?

Back to the blog claiming to see a dawn of colonialism, here’s a contentious prediction of what cyber-colonialism will look like:

If the manager decides to go with modern internationally sourced computerized system, it is impossible to guarantee that they will operate against the will of the source nation. The manufactured low security standards (or deliberate back doors) pretty much guarantee that the signaling system will be susceptible to hacking, ultimately placing it under the control of technologically advanced nations. In brief, this choice is equivalent to surrendering the control of this critical infrastructure, on which both the economic well-being of the nation and its military capacity relies, to foreign power(s).

The blog author, George Danezis, apparently has no experience with managing risk in critical infrastructure or with auditing critical infrastructure operations so I’ll try to put this in a more tangible and real context:

Recently, on a job in Alaska, I was riding a state-of-the-art train. It had enough power in one engine to run an entire American city. Perhaps I will post photos here, because the conductor opened the control panels and let me see all of the great improvements in rail technology.

The reason he could let me in and show me everything was that the entire critical infrastructure was shut down. I was told this happened often. Whenever the central switching system had a glitch, which was more often than you might imagine, all the trains everywhere were stopped. After touring the engine, I stepped off the train and up into a diesel truck driven by a rail mechanic. His beard was as long as a summer day in Anchorage, and he assured me trains have to be stopped due to computer failure all the time.

I was driven back to my hotel because no trains would run again until the next day. No trains. In all of Alaska. America. So while we opine about colonial exploitation of trains, let’s talk about real reliability issues today and how chips with backdoors really stack up. Someone sitting at a keyboard can worry about the resilience of modern chips all they want, but it needs to be linked to experience with the “modern internationally sourced computerized system” used to run critical infrastructure. I have audited critical infrastructure environments since 1997, and let me tell you they have a unique and particular risk management model that would probably surprise most people on the outside.

Risk is something rarely understood from an outside perspective unless time is taken to explore actual faults in big-picture environments and to study actual events, past and present. In other words, you can’t do a very good job auditing without spending time doing the audit, on the inside.

A manager going with a modern internationally sourced computerized system is (a) subject to a wide spectrum of factors of great significance (e.g. dust, profit, heat, water, parts availability, supply chains), and (b) worried about the presence of backdoors for the opposite reason you might think: they represent hope for support and help during critical failures. I’ll say it again: they WANT backdoors.

It reminds me of a major backdoor into a huge international technology company’s flagship product. The door suggested potential for access to sensitive information. I found it, I reported it. Instead of alarm, the company repeatedly assured me I had stumbled upon a “service” highly desirable to customers who did not have the resources, or the desire, to troubleshoot critical failures. I couldn’t believe it. But as the saying goes: one person’s bug is another person’s feature.

To make this absolutely clear, there is a book called “Back Door Java” by Newberry that I highly recommend people read if they think computer chips might be riddled with backdoors. It details how the culture of Indonesia celebrates the backdoor as an integral element of progress and resilience in daily lives.

Cooking and gossip are done through a network of access to everyone’s kitchen, in the back of a house, connected by alley. Service is done through back, not front, paths of shared interests.

This is not that peculiar when you think about American businesses that hide critical services in alleys and loading docks away from their main entrances. A hotel guest in America might say they don’t want any backdoors, until they realize they won’t be getting clean sheets or even soap and toilet paper. The backdoor is not inherently evil and may actually be essential. The question is whether abuse can be detected or prevented.

Dominance and control are quite complex when you really look at the relationships of groups and individuals engaged in access paths both overt and covert.

So back to the paragraph we started with, I would say a manager is not surrendering control in the way some might think when access is granted, even if access is greater than what was initially negotiated or openly/outwardly recognized.

With all that in mind, re-consider the subsequent colonization arguments given by “The dawn of Cyber-Colonialism”:

Not opting for computerized technologies is also a difficult choice to make, akin to not having a mobile phone in the 21st century. First, it is increasingly difficult to source older hardware, and the low demand increases its cost. Without computers and modern network communications is it also impossible to benefit from their productivity benefits. This in turn reduces the competitiveness of the small nation infrastructure in an international market; freight and passengers are likely to choose other means of transport, and shareholders will disinvest. The financial times will write about “low productivity of labor” and a few years down the line a new manager will be appointed to select option 1, against a backdrop of an IMF rescue package.

That paragraph contains an obvious false-choice fallacy. The opposite of granting access (the prior paragraph) would be not granting access. Instead, this paragraph asks us to believe the only other choice is a lack of technology.

Does anyone believe it is increasingly difficult to source older hardware? We are given no reason to. I’ll give you two reasons why old hardware could be increasingly easy to source: reduced friction and increased privacy.

About 20% of people keep their old device because it’s easier than selling it. Another 20% keep their device because of privacy concerns. That’s 40% of old hardware sitting ready to be used, if only we could erase the data securely and make the devices easy to exchange for money. SellCell.com (trying to solve one of those problems) claims the stock of older cellphone hardware in America alone is now worth about $47 billion.
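The arithmetic behind that 40% figure is trivial, but worth making explicit, since the two share estimates are the only numbers doing the work (both are the estimates quoted above, not measurements of mine):

```python
# Estimated shares of owners holding on to old devices (figures from the text)
kept_for_friction = 0.20  # easier to keep the device than to sell it
kept_for_privacy = 0.20   # worried about data left on the device

# combined share of old hardware sitting unused
idle_share = kept_for_friction + kept_for_privacy
print(f"{idle_share:.0%} of old hardware sits idle")  # prints "40% of old hardware sits idle"
```

Note the two categories are treated as non-overlapping, as the original estimates imply; any overlap would shrink the combined share.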

And who believes that low demand increases cost? What kind of economic theory is this?

Scarcity increases cost, but we do not have evidence of scarcity. We have the opposite. For example, there is no demand for the box of Blackberry phones sitting on my desk.

Are you willing to pay me more for a Blackberry because of low demand?

Even more suspect is the statement that without computers and modern network communications it is impossible for a country to benefit. Having been given a false-choice fallacy (either have the latest technology or nothing at all), are we to believe that everyone in the world who doesn’t buy the latest technology is doomed to fail and devalue their economy?

Apply this to ANY environment and it should be abundantly clear why this is not the way the world works. New technology is embraced slowly and cautiously (relative terms) compared with known good technology that has proven itself useful. Technology is bought over time, with varying degrees of being “advanced”.

To further complicate the choice, some supply chains have a really long tail due to the nature of a device achieving a timeless status and generating localized innovation with endless supplies (e.g. the infamous AK-47, classic cars).

To make this point clearer, just tour the effects of telecommunications providers in countries like South Africa, Brazil, India, Mexico, Kenya and Pakistan. I’ve written about this before on many levels and visited some of them.

I would not say it is the latest or greatest tech, but rather the tech available, which builds economies by enabling disenfranchised groups to create commerce and increase wealth. When a customer tells me they can only get 28.8K modem speeds, I do not laugh at them or pity them. I look for solutions that integrate with slow links for incremental gains in resilience, transparency and privacy. When I’m told 250ms latency is the norm, it’s the same thing: I build solutions that integrate and provide incremental gains. It’s never all-or-nothing.
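To make the slow-link constraint concrete, here is a back-of-the-envelope model of what “integrating with slow links” has to budget for. The payload size and round-trip count below are hypothetical examples, not figures from any customer engagement:

```python
def transfer_time_seconds(payload_bytes, link_kbps=28.8, rtt_ms=250.0, round_trips=1):
    """Rough transfer time over a slow link: serialization delay plus latency."""
    serialization = (payload_bytes * 8) / (link_kbps * 1000)  # bits over bits/sec
    latency = (rtt_ms / 1000.0) * round_trips
    return serialization + latency

# a hypothetical 100 KB payload over a 28.8 kbit/s modem with one round trip
t = transfer_time_seconds(100 * 1024)
print(f"{t:.1f} seconds")  # roughly half a minute for 100 KB
```

The point of the paragraph survives the math: a protocol designed around payloads this small remains usable at 28.8K, while one that assumes broadband does not.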

A micro-loan robot in India that goes into rough neighborhoods to dispense cash, for example, is a new concept based on relatively simple supplies that has a dramatic impact. Groups in a Kenyan village share a single cell-phone and manage it similarly to the old British phone booth. There are so many more examples, none of which break down in simple terms of the amazing US government versus technologically-poor countries left vulnerable.

And back to the blog paragraph we started with, my guess is the Financial Times will write about “productivity of labor” if we focus on real risk, and a few years down the line new managers will be emerging in more places than ever.

Now let’s look at the conclusion given by “The dawn of Cyber-Colonialism”:

Maintaining the ability of western signals intelligence agencies to perform foreign pervasive surveillance, requires total control over other nations’ technology, not just the content of their communication. This is the context of the rise of design backdoors, hardware trojans, and tailored access operations.

I don’t know why we should believe anything in this paragraph. Total control of technology is not necessary to maintain intelligence capability. That defies common sense. Total control is not necessary for intelligence to be highly effective, nor will intelligence necessarily be better than with partial or incomplete control (as explained best by David Hume).

My guess is that paragraph was written with those terms because they have a particular ring to them, meant to evoke a reaction rather than explain a reality or demonstrate proof.

Total control sounds bad. Foreign pervasive surveillance sounds bad. Design backdoors, Trojan horses and tailored access (opposite of total control) sound bad. It all sounds so scary and bad, we should worry about them.

But back to the point: even if we worry because such scary words are being thrown at us about how technology may be tangled into a web of international commerce and political purpose, nothing in that blog on “cyber-colonialism” comes even close to qualifying as colonialism.

Posted in History, Security.