Category Archives: History

Crowdstrike or Clownstrike? A Political Science TL;DR for InfoSec

Scotland’s national animal is a unicorn. What does that tell you?

More and more often I see those experienced in technology very awkwardly address issues of political science.

  • A malware reverser will speculate on terrorist motives.
  • An expert with network traffic analysis will make guesses about organized crime operations.

When a journalist asks an expert in information security to explain the human science of an attack, such as the cultural groups and influences involved, the answers sound more like quips and jabs than deep thought grounded in established human science or study.

This is unfortunate since I suspect a little reading or discussion would improve the situation dramatically.

My impression is there is no clear guide floating around, however. When I raise this issue I’ve been asked to put something together. So, given I spent my undergraduate and graduate degrees in the abyss of political philosophy (ethics of humanitarian intervention, “what kind of job will you do with that”), perhaps I can help here in the ways that I was taught.

Reading “The Three-Body Problem” would help, perhaps, but Chinese Sci-Fi seems too vague a place to start from…

Set against the backdrop of China’s Cultural Revolution, a secret military project sends signals into space to establish contact with aliens. An alien civilization on the brink of destruction captures the signal and plans to invade Earth. Meanwhile, on Earth, different camps start forming, planning to either welcome the superior beings and help them take over a world seen as corrupt, or to fight against the invasion.

I offer Chinese literature here mainly because many attempts to explain “American” hacker culture tend to start with Snow Crash or a similar text.

Instead of that, I will attempt to give a far clearer example, which recently fell on my desk.

Say Silent Chollima One More Time

About two years ago a private company created by a wealthy businessman came out of stealth mode. It was launched with strong ties to the US government and ambitious goals to influence the world of information security investigations.

When 2013 kicked off CrowdStrike was barely known outside of inner-sanctum security circles. The stealth startup–founded by former Foundstone CEO, McAfee CTO, and co-author of the vaunted Hacking Exposed books George Kurtz–was essentially unveiled to the world at large at the RSA Security Conference in February.

Today in 2015 (2 years after the company was announced and 4 years after initial funding) take note of how they market the length of their projects and experience; they slyly claim work dating back to 2006, at least 4 years before they existed.

Interviewer: What do you make of the FBI finding — and the president referred to it — that North Korea and North Korea alone was behind this attack?

CrowdStrike: At CrowdStrike, we absolutely agree with that. We have actually been tracking this actor. We actually call them Silent Chollima. That’s our name for this group that is based out of North Korea.

Interviewer: Say the name again.

Crowdstrike: Silent Chollima. Chollima is actually a national animal of North Korea. It’s a mythical flying horse. And we have been tracking this group since 2006.

Hold on to that “mythical flying horse” for a second. We need to talk about 2006.

CrowdStrike may have blended its own identity so thoroughly with the US government that they do not realize those of us outside their gravy-train business concept cringe when lines are blurred between a CrowdStrike marketing launch and government bureaus. Hiring many people away from the US government still does not excuse such casual use of “we” when speaking about intelligence from before the 2013 company launch date.

Remember the mythical flying horse?

Ok, good, because word use and definitions matter greatly to political scientists. Reference to a mythological flying horse is a different kind of sly marketing. CrowdStrike adds heavy emphasis to their suspects and a leading characterization where none is required and probably shouldn’t be used. They want everyone to take note of what “we actually call” suspects without any sense of irony for this being propagandist.

Some of their “slyness” may just be sloppy work, insensitive or silly labeling for convenience, rather than outright attempts designed to bias and change minds. Let’s look at their “meet the adversaries” page.

[Image: CrowdStrike “Meet the Adversaries” list of animal-named groups]

Again it looks like a tossup between sloppy work and intentional framing.

Look closely at the list. Anyone else find it strange that a country of Tiger is an India?

What kind of mythical animal is an India? Ok, but seriously, only the Chollima gets defined by CrowdStrike? I have to look up an India?

We can surmise that Iran (Persia) is being mocked as a Kitten while India gets labeled with a Tiger (perhaps a nod to Sambo) as some light-hearted back-slapping comedy by white men in America to lighten up the mood in CrowdStrike offices.

Long nights poring over forensic material, might as well start filing with pejorative names for foreign indicators because, duh, adversaries.

Political scientists say the words used to describe a suspect before a court trial heavily influence everyone’s views. An election also has this effect on deciding votes. Pakistan has some very interesting long-term studies of voting results from ballots for the illiterate, where candidates are assigned an icon.

Imagine a ballot for voting, and you are asked to choose between a poisonous snake or a fluffy kitten. This is a very real world example.

Vote for me!
Vote for me!

Social psychologists have a test they call Implicit Association that is used in numerous studies to measure response time (in milliseconds) of human subjects asked to pair word concepts. Depending on their background, people more quickly associate words like “kitten” with pleasant concepts, and “tiger” more quickly with unpleasant ideas. CrowdStrike above is literally creating the associations.
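For the curious, the core of that measurement can be sketched in a few lines. The latencies below are invented for illustration, and the real test uses many trials and a more careful scoring procedure (the so-called D-score), but the idea is the same: a latency gap between pairing conditions, scaled by variability.

```python
# Simplified sketch of Implicit Association Test scoring (hypothetical data).
# The IAT compares response latencies (ms) when subjects pair concepts:
# faster "congruent" pairings reveal stronger implicit associations.
from statistics import mean, stdev

# Hypothetical latencies in milliseconds for two pairing conditions.
congruent = [420, 455, 431, 402, 468, 440]    # e.g. "kitten" + pleasant
incongruent = [612, 580, 655, 590, 640, 602]  # e.g. "kitten" + unpleasant

def d_score(congruent, incongruent):
    """Simplified IAT effect size: latency gap over pooled variability."""
    pooled_sd = stdev(congruent + incongruent)
    return (mean(incongruent) - mean(congruent)) / pooled_sd

print(round(d_score(congruent, incongruent), 2))
```

A positive score means the “congruent” pairing came faster, i.e. the association already exists in the subject’s mind, which is exactly what repeated labeling can create.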

As an amusing aside, it was an unfortunate, tone-deaf marketing decision by top executives (mostly British) at EMC to name their flagship storage solution “Viper”. Nobody in India wanted to install a Viper in their data centers, hopefully for obvious reasons.

Moreover, CrowdStrike makes no bones about saying someone they suspect is considered guilty until proven innocent. This unsavory political philosophy comes through clearly in another interview (where they also take a moment to throw Chollima into the dialogue):

We haven’t seen the skeptics produce any evidence that it wasn’t North Korea, because there is pretty good technical attribution here. […] North Korea is one of the few countries that doesn’t have a real animal as a national animal. […] Which, I think, tells you a lot about the country itself.

Let me highlight three statements here.

  1. We haven’t seen the skeptics produce any evidence that it wasn’t North Korea
  2. North Korea is one of the few countries that doesn’t have a real animal as a national animal.
  3. Which, I think, tells you a lot about the country itself.

We’re going to dive right into those.

I’ll leave the “pretty good technical attribution” statement alone here because I want to deal with that in a separate post.

Let’s break those three statements into two parts.

First: Skeptics Haven’t Produced Evidence

Is it a challenge for skeptics to produce counter-evidence? Bertrand Russell eloquently and completely destroyed such reasoning long ago. His simple celestial teapot analogy speaks for itself.

If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

This is the danger of ignoring lessons from basic political science, let alone its deeper philosophical underpinnings; you end up an information security “thought leader” talking absolute nonsense.

CrowdStrike may as well tell skeptics to produce evidence attacks aren’t from a flying horse.

The burden of proof logically and obviously remains with those who sit upon an unfalsifiable belief. As long as investigators offer statements like “we see evidence and you can’t” or “if only you could see what we see”, the burden cannot be shifted away so easily and negligently.

Perhaps I also should bring in the proper, and sadly ironic, context to those who dismiss or silence the virtue of skepticism.

Studies of North Korean politics emphasize their leaders often justify total control while denying information to the public, silencing dissent and making skepticism punishable. In an RT documentary, for example, North Korean officers happily say they must do as they are told and they would not question authority because they have only a poor and partial view; they say only their dear leader can see all the evidence.

Skepticism should not be rebuked by investigators if they desire, as scientists tend to, challenges that help them find truth. Perhaps it is fair to say CrowdStrike takes the very opposite approach of what we call crowdsourcing?

Analysts within the crowd who speak out as skeptics tend to be most practiced in the art of accurate thought, precisely because they do not dismiss caution and doubt. Incompleteness is embraced and examined. Recent studies explain this. Read, for example, a new study called “Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics” that highlights how and why politics alter analyst conclusions.

Analysts also operate under bureaucratic-political pressure and are tempted to respond to previous mistakes by shifting their response thresholds. They are likelier to say “signal” when recently accused of underconnecting the dots (i.e., 9/11) and to say “noise” when recently accused of overconnecting the dots (i.e., weapons of mass destruction in Iraq). Tetlock and Mellers (2011) describe this process as accountability ping-pong.

Then consider an earlier study regarding what makes people into “superforecasters” when they are accountable to a non-political measurement.

…accountability encourages careful thinking and reduces self-serving cognitive biases. Journalists, media dons and other pundits do not face such pressures. Today’s newsprint is, famously, tomorrow’s fish-and-chip wrapping, which means that columnists—despite their big audiences—are rarely grilled about their predictions after the fact. Indeed, Dr Tetlock found that the more famous his pundits were, the worse they did.

CrowdStrike is as famous as any company can get, as designed from flashy launch. Do they have any non-political, measured accountability to go with their pomp and circumstance?

Along with being skeptical, analysts sometimes are faulted for being grouchy. It turns out in other studies that people in bad moods remember more detail in investigations and provide more accurate facts, because they are skeptical. The next time you want to tell an analyst to brighten up, think about the harm to the quality of their work.

Be skeptical if you want to find the right answers in complex problems. And stay grouchy if you want to be more detail oriented.

Second: A Country Without a Real Animal

Going back to the interview statement by CrowdStrike, “one of the few countries” without “a real animal as a national animal” is easy to check, and it is obviously false.

With a touch of my finger I find mythical national animals used in England, Wales, Scotland, Bhutan, China, Greece, Hungary, Indonesia, Iran, Portugal, Russia, Turkey, Vietnam…and the list goes on.

Don’t forget the Allies’ Chindits in WWII, for example. Their name came from a corruption of the Burmese mythical chinthe, a lion-like creature (symbolizing a father lion slain by his half-lion son who wanted to please his human mother) that frequently guards Buddhist temples in pairs of statues.

Chindits or Long Range Penetration Groups 1943-1944 were precursors to today’s military “special forces”. A Burmese national mythical beast was adopted as their name, as they were led by Orde Wingate in irregular warfare against the Japanese.

Even if I try to put myself in the shoes of someone making such a claim I find it impossible to see how use of national mythology could seem distinctly North Korean to anyone from anywhere else. It almost makes me laugh when I think this is a North Korean argument for false pride: “only we have a mythological national animal”.

The reverse also is awkward. Does anyone really vouch for a lack of any real national animal for this territory? In the mythical eight years of CrowdStrike surveillance (arguably two years) did anyone notice, for example, that Plestiodon coreensis stamps were issued (honoring a very real national lizard unique to North Korea) or the North Korean animation shows starring the very real Sciurus vulgaris and Martes zibellina (Squirrel and Hedgehog)?

From there, right off the top of my head, I think of national mythology frequently used in Russia (two-headed monster) and England (monster being killed):

[Image: Russian coat of arms, St. George slaying the dragon]

And then what about America using mythical beasts at all levels, from local to national? What does it say when a Houston “Astro” plays against a Colorado “Rocky”? Are we really supposed to cheer for a mythical mountain beast, some kind of anthropomorphic purple triceratops, or is it better that Americans rally around a green space alien with antennae?

Come on CrowdStrike, where did you learn analysis?

At this point I am unsure whether to go on to the second half of the CrowdStrike statement. Someone who says national mythical animals are unique to North Korea is in no position to assert it “tells you a lot about the country itself”.

Putting myself again in their shoes, CrowdStrike may think they convey “fools in North Korea have false aspirations; people there should be more skeptical”.

Unfortunately the false uniqueness claim makes it hard to unravel who the fools really are. A little skepticism would have helped CrowdStrike realize mythology is universal, even at the national level. So what do we really learn when a nation has evidence of mythology?

In my 2012 Big Data Security presentations I touched on this briefly. I spoke to risks of over-confidence and belief in data that may undermine further analytic integrity. My example was the Griffin, a mythological animal (used by the Republic of Genoa, not to mention Greece and England).

Recent work by an archeologist suggests these legendary monsters were a creative interpretation by Gobi nomads of Protoceratops bones. Found during gold prospecting, the unfamiliar bones turned into stories told to trading partners, which spread further until many people were using Griffins in their architecture and crests.

Ok, so really mythology tells us that people everywhere are creative and imaginative with minds open to possibilities. People are dreamers and have aspirations. People stretch the truth and often make mistakes. The question is whether at some point a legend becomes hard or impossible to disprove.

A flying horse could symbolize North Koreans are fooled by shadows, or believe in legends, but who among us is not guilty of creativity to some degree? Creativity is the balance to skepticism and helps open the mind to possibilities not yet known or seen. It is not unique to any state but rather essential to the human pursuit of truth.

Be creative if you want to find the right answers in complex problems.

Third: Power Tools and Being More Informed Versus Better Informed

Intelligence and expertise in security, as you can see, do not automatically transfer into a foundation for sound political-science thought. Scientists often barb each other about who has the more difficult challenges to overcome, yet there are real challenges in everything.

I think it important to emphasize here that understanding human behavior is a very different skill. Not a lesser skill, a different one. XKCD illustrates how a false or reverse-confidence test is often administered:

XKCD Imposter

Being the best brain surgeon does not automatically make someone an expert in writing laws any more than a political scientist would be an expert at cutting into your skull.

Basic skills in any field can be used to test for fraud (imposter exams), while the patience required for more nebulous, open-ended advanced thinking in any field can be abused. Multiplication tables need not be memorized because you can look them up to check true/false. So too with facts in political science, as I illustrated with mythology and symbolism for states. Quick, what’s your state animal?

Perhaps it is best said there are two modes to everything: things that are trivial and things that are not yet understood. The latter is what people mean when they say they have found something “sophisticated”.

There really are many good reasons for technical experts to quickly bone up on the long and detailed history of human science. Not least of them is to cut down propaganda and shadows, move beyond the flying horses, and uncover the best answers.

The examples I used above are very specific to current events in order to clarify what a problem looks like. Hopefully you see a problem to be solved and now are wondering how to avoid a similar mistake. If so, now I will try to briefly suggest ways to approach questions of political science: be skeptical, be creative. Some might say leave it to the professionals, the counter-intelligence experts. I say never stop trying. Do what you love and keep doing it.

Achieving a baseline to parse how power is handled should be an immediate measurable goal. Can you take an environment, parse who the actors are, what groups they affiliate with and their relationships? Perhaps you see already the convenient parallels to role based access or key distribution projects.
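As a rough sketch of that baseline (all names and groups here are hypothetical), the exercise maps directly onto the role-based access model most security engineers already know: actors, group affiliations, and the powers those affiliations grant.

```python
# A minimal sketch of parsing an environment into actors, group
# affiliations, and relationships. All names are invented for illustration.
from collections import defaultdict

memberships = [  # (actor, group) pairs observed in the environment
    ("alice", "admins"),
    ("bob", "auditors"),
    ("carol", "admins"),
    ("carol", "auditors"),  # actors can hold overlapping affiliations
]

grants = {  # group -> permissions: the role-based-access parallel
    "admins": {"read", "write", "grant"},
    "auditors": {"read"},
}

def powers(actor):
    """Union of permissions an actor derives from every group affiliation."""
    groups = [g for a, g in memberships if a == actor]
    return set().union(*(grants[g] for g in groups)) if groups else set()

def shared_groups():
    """Relationships: which actors sit in more than one power bloc."""
    affiliations = defaultdict(set)
    for actor, group in memberships:
        affiliations[actor].add(group)
    return {a: gs for a, gs in affiliations.items() if len(gs) > 1}

print(sorted(powers("carol")))  # carol inherits both roles
print(shared_groups())
```

The interesting political-science questions start exactly where this toy model ends: what happens when memberships overlap, shift, or are hidden.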

Aside from just being a well-rounded thinker, learning political science means developing powerful analytic tools that quickly and accurately capture and explain how power works.

Stateful Inspection (Pun Intended)

Power is the essence of political thought. The science of politics deals with understanding systems of governing, regulating power, of groups. Political thinking is everywhere, and has been forever, from the smallest group to the largest. Many different forms are possible. Both the framework of the organization and leadership can vary greatly.

Some teach mainly about relationships between states, because states historically have been a foundation to generation of power. This is problematic as old concepts grow older, especially in IT, given that no single agreed-upon definition of “state” yet exists.

Could users of a service ever be considered a state? Google might be the most vociferous and open opponent of our old definitions of state. While some corporations engage with states and believe in collaboration with public services, Google appears to define a state as an irrelevant localized tax hindering its global ambitions.

A major setback to this definition came when an intruder was detected moving about Google’s state-less global flat network to perpetrate IP theft. Google believed China was to blame and went to the US government for services; only too late did the heads of Google realize state-level protection without a state affiliation could prove impossible. Here is a perfect example of Google engineering anti-state theory full of dangerous presumptions that court security disaster:


[Image: Google domination]

A state is arguably made up of people, who govern through representation of their wants and needs. Google sees benefits in taking all the power and owing nothing in return, doing as they please because they know best. An engineer that studied political science might quickly realize that removing ability for people to represent themselves as a state, forced to bend at the whim of a corporation, would be a reversal in fortune rather than progress.

It is thus very exciting to think how today technology can impact definitions for group membership and the boundaries of power. Take a look at an old dichotomy between nomadic and pastoral groups. Some travel often, others stay put. Now we look around and see basic technology concepts like remote management and virtual environments forcing a rethink of who belongs to what and where they really are at any moment in time.

Perhaps you remember how Amazon wanted to provide cloud services to the US government under ITAR requirements?

Amazon Web Services’ GovCloud puts federal data behind remote lock and key

The question of maintaining “state” information was raised because ITAR protects US secrets by requiring that only citizens have access. Rather than fix their cloud’s inability to provide security at the required level, Amazon created a dedicated private datacenter where only US citizens had keys. Physical separation. A more forward-thinking solution would have been to develop encryption and identity management that avoided breaking up the cloud while still complying with the requirements.

This problem came up again in reverse when Microsoft was told by the US government to hand over data in Ireland. Had Microsoft built a private-key solution, linked to the national identity of users, they could have demonstrated an actual lack of access to that data. Instead you find Microsoft boasting to the public that state boundaries have been erased, your data moves with you wherever you go, while telling the US government that data in Ireland can’t be accessed.
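To make the custody idea concrete, here is a minimal sketch (hypothetical names throughout; the XOR keystream is a stand-in for a real cipher, not a cryptographic recommendation) of how keeping the key with an in-jurisdiction custodian leaves the provider holding ciphertext it provably cannot open.

```python
# Sketch: provider stores only ciphertext; the decryption key is generated
# and held by a custodian inside the user's jurisdiction. The "cipher" here
# is illustrative only, to make key custody concrete.
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream from the key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, key: bytes) -> bytes:
    """Symmetric transform: same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Key lives in-jurisdiction; the provider never sees it.
irish_custodian_key = secrets.token_bytes(32)

record = b"mail of an Irish user"
stored_by_provider = xor(record, irish_custodian_key)  # ciphertext only

# A demand served on the provider yields bytes it cannot open;
# only the in-jurisdiction custodian can reverse the transform.
assert stored_by_provider != record
assert xor(stored_by_provider, irish_custodian_key) == record
```

Under a design like this a provider could demonstrate an actual lack of access, instead of claiming borderless data to the public while pleading inaccessibility to a court.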

Being stateful is not just a firewall concern, it really has roots in political science.

How Political is Clownstrike? An Ethics Test McAfee Ex-Execs Likely Can’t Pass

Does the idea of someone moving freely scare you more or a person who digs in for the long haul and claims proof of boundary violations where you see none?

Whereas territory used to be an essential characteristic of a state, today we wonder what membership and presence means when someone can remain always connected, not to mention their ability to roam within overlapping groups. Boundaries may form around nomads who carry their farms with them (i.e. playing FarmVille) and of course pastoralism changes when it moves freely without losing control (i.e. remote management of a Data Center).

Technology is fundamentally altering the things we used to rely upon to manage power. On the one hand this is of course a good thing. Survivability is an aim of security, reducing the impact of disaster by making our data more easily spread around and preserved. On the other hand this great benefit also poses a challenge to security. Confidentiality is another aim of security, controlling the spread of data and limiting preservation to reduce exposure. If I can move 31TB/hr (a recent estimate) to protect data from being destroyed, it also becomes harder to stop giant exfiltration of data.

From an information security professional’s view the two sides tend to be played out in different types of power and groups. We rarely, if ever, see a backup expert in the same room as a web application security expert. Yet really it’s a sort of complicated balance that rests on top of trust and relationships, the sort of thing political scientists love to study.

With that in mind, notice how Listverse plays to popular fears with a top ten “Ominous State-Sponsored Hacker Group” article. See if you now, thinking about a balance of power between groups, can find flaws in their representation of security threats.

It is a great study. Here are a few questions that may help:

  • Why would someone use “ominous” to qualify “state-sponsored” unless there also exist non-ominous state-sponsored hacker groups?
  • Are there ominous hacker groups that lack state support? If so, could they out-compete state-sponsored ones? Why or why not? Could there be multiple-affiliations, such that hackers could be sponsored across states or switch states without detection?
  • What is the political relationship, the power balance, between those with a target surface that gives them power (potentially running insecure systems) and those who can more efficiently generate power to point out flaws?
  • How do our own political views affect our definitions and what we study?

I would love to keep going yet I fear this would strain too far the TL;DR intent of the post. Hopefully I have helped introduce someone, anyone (hi mom!), to the increasing need for combined practice in political science and information security. This is a massive topic and perhaps if there is interest I will build a more formal presentation with greater detail and examples.

Updated 19 January: added “The Psychology of Intelligence Analysis” citation and excerpt.

Automobile Control System “Eliminates” the Driver

All day today on Twitter I was twittering, tweeting, sending twats about the lack of historic perspective in the auto-auto, self-driving, driver-elimination…whatever you want to call this long-time coming vehicle drone industry.

Here is a good example:

Not many faves for my tweet of the old RCA radio controlled drone concept

“Control system” was all the rage for terminology in the 1960s, I guess. Must have sounded better back then than cybernetics, which was coined in the 1940s to mean control system.

Consistent terminology is hard. Marketing old ideas is too. No one would say control system today, would they? They certainly wouldn’t say auto-auto.

The words automobile and automotive already have the word auto built-in. Seems tragic we have forgotten why we put “auto” there in the first place. Auto mobile, not so auto anymore?

An old racing friend called this becoming “velocitized”. After you get used to things at a certain speed or way you lose touch. So the word auto is no longer impressive. We need to speed up again or we will start to feel like we’re standing still — need more auto.

And so the “auto” industry wants to become automated again but that sounds like failure so let’s come up with a new phrase. Sure history is being obscured but that’s the very problem of people becoming velocitized over time, so someone came up with “self-driving” to get our attention again.

Self-driving, aside from being disconnected from auto roots, sounds slightly sour and selfish because if it’s driving you or others around it really isn’t just “self” driving is it? What will conjugations of drone be for future generations? He, she, self-driving? Here comes the self-driving car. There goes the…not-just-self-driving car?

Sometimes I get stuck on stupid rule jokes, I know. Anyway, the General Motors of 1956 offered the world “a future vision of driverless cars”.

They called it the “far-off future of 1976”. No joke. Driverless cars by 1976. Crazy to think about that timeline today. Twenty years is all GM thought they’d need to get cars driving people around without a driver. No wait, I mean driving others around while driving themselves.

This wasn’t a flash-in-the-pan idea. Within just a couple of years RCA was in on the future vision, promoting a wireless system for coordinating drones. The NYT front-page headline of June 6, 1960 read:

Automobile Control System Eliminates the Driver

And it went so far as to back-date the research 7 years, promising full use by the far-off future of…wait for it…1975!

FRUIT OF 7 YEARS’ STUDY R.C.A. and G.M. Jointly Conducted It — Full Use Seen 15 Years Away

1960 NYT Cover Story on Driverless Cars

I want you to think carefully about a headline in 1960 that says robotic machines will “eliminate” humans. Hold that thought.

Going back 7 years would be 1953, which sounds like it would be the GM Firebird rocket-car concept with automobile-control-towers to avoid rocket collisions on roads. Thank goodness for humans in those control-towers.

1954 GM Firebird

By 1964 the idea of automation seemed to still be alive. GM’s Stiletto concept had rear-view cameras and ultra-sonic obstacle sensors. Surely those were a mere stepping-stone away from full drone. Or was there a slide backwards towards keeping human judgment in the mix?

1964 GM Stiletto dashboard

Take a guess at what happened in the intervening years that might have changed the messaging.

If you said “Cuban Missile Crisis” you win a vehicle…that eliminates humans.

Robert McNamara, who sat at the US Cabinet Level during the crisis, said this about automation:

“Kennedy was rational. Khrushchev was rational. Castro was rational” and yet they were on a path that would push the world nearly to annihilation

McNamara then wisely predicted “rationality will not save us”.

Odd thing about that guy McNamara, he was a top executive at Ford motor company before he joined the Kennedy Presidential Cabinet.

Perhaps it now is easier to see when and why views on automobile automation shifted. Instead of full speed ahead to 1975, as predicted, by 1968 you had popular culture generating future visions of 2001, where a self-driving spaceship attempts to “eliminate” its human passengers.

Spoiler alert: Hal took the term “self-driving” too literally.

Moral of this post (and history) is don’t trust automation and choose your automation code words carefully. Beware especially the engineers who re-brand mistakes as being “too perfect” or completely rational, as if they don’t know who McNamara is or what he taught us. Because if you forget history you might be condemned to automate it.

The Beginning Wasn’t Full-Disclosure

An interesting personal account of vulnerability disclosure called “In the Beginning There was Full Disclosure” makes broad statements about the past.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it.

I don’t know about you, but immediately my brain starts searching for a date. What year was this beginning?

No dates are given, only clues.

First clue, a reference to RFP.

So a guy named Rain Forest Puppy published the first Full Disclosure Policy promising to release vulnerabilities to vendors privately first but only so long as the vendors promised to fix things in a timely manner.

There may be earlier versions. The RFP document doesn’t have a date on it, but links suggest 2001. Lack of date seems a bit strange for a policy. I’ll settle on 2001 until another year pops up somewhere.

Second clue, vendors, meaning Microsoft.

But vendors didn’t like this one bit and so Microsoft developed a policy on their own and called it Coordinated Disclosure.

This must have been after the Gates’ memo of 2002.

Both clues say the beginning was around 2000. That seems odd because software-based updates in computers trace back to 1968.

It also is odd to say the beginning was a Microsoft policy called Coordinated Disclosure. Microsoft says they released that in 2010.

Never mind 2010. Responsible disclosure was the first policy/concept at Microsoft; they mention it in 2003, right after the Gates memo on security, and invoked it again when discussing how Tavis Ormandy decided unilaterally to release a 0day on XP.

Thus all of the signals, as I dug through the remainder of the post, suggest vulnerability research beginning around 15 years ago. To be fair, the author gives a couple earlier references:

…a debate that has been raging in security circles for over a hundred years starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990’s as security researchers started finding vulnerabilities in products.

Yet for my tastes these are too short a history (the 1890s were not the first release of locksmith secrets) and not independent (L0pht takes credit for raising the debate around themselves).

Locksmith secrets are thousands of years old. Their disclosure follows. Pin-tumblers get called Egyptian locks because that is where they are said to have originated; technically the Egyptians likely copied them out of Mesopotamia (today Iraq). Who believes Mesopotamia was unhappy their lock vulnerabilities were known? And that is really only the tip of the iceberg in thousands of years of disclosure history.

I hear L0pht taking credit again. Fair point. They raised a lot of awareness while many of us were locked in dungeons. They certainly marketed themselves well in the 1990s. No question there. Yet were they raising the debate or joining one already in progress?

To me the modern distributed systems debate raged much, much earlier. The 1968 Carterfone case, for example, ignited a whole generation seeking boundaries for “any lawful device” on public communication lines.

In 1992 Wietse Venema appeared quite adamant about the value of full disclosure, as if trying to argue it needs to happen. By 1993 he and Dan Farmer published the controversial paper “Improving the security of your site by breaking into it”.

They announced a vulnerability scanner that would be made public. It was the first of its kind. For me this was a turning point in the industry: justifying visibility in a formal paper and forcing open discussion of risk in an environment that had mostly preferred secret fixes. The public Emergency Response and Incident Advisory concepts still meant working with vendors on disclosure, which I will get to in a minute.

As a side-note, the ISS founder claims to have written an earlier version of the same kind of vulnerability scanner. Although possible, so far I have found nothing outside his own claims to back this up. SATAN was free and has far wider recognition (e.g. a USENIX paper), and it was easily found running in the early 1990s. I remember when ISS first announced in the mid 1990s; it appeared to be a commercial version of SATAN that did not even try to distinguish or back-date itself.

But I digress. Disclosure of vulnerabilities in 1992 felt very controversial. The disclosures I found were hush-hush, and the fraught ethical discussion of exposing weakness is clearly captured in the Venema/Farmer paper. There definitely was still secrecy, not yet a full-disclosure climate.

Just to confirm I am not losing my memory, I ran a few searches on an old vulnerability disclosure list, the CIAC. Sure enough, right away I noticed secretive examples. A January 4, 1990 notice on the Texas Instruments D3 Process Control System gives no details, only:

TI Vuln Disclosure

Also in January 1990, Apple has the same type of vulnerability notice.

Even more to the point, and speaking of SATAN, I also noticed HP using a pre-release notice: HP issued its notice before the researchers made the vulnerabilities public. This confirms my memory isn't far off; full disclosure was not the norm.

HP SATAN

Vendors shifted how they responded not because a researcher released a vulnerability under the banner of full disclosure, which a vendor had powerful legal and technical tools to dispute. Rather, SATAN changed the economics of disclosure by making the discussion with a vendor about self-protection through awareness first-person and free.

Anyone could generate a new report, anywhere, anytime so the major vendors had to contemplate the value of responding to an overall “assessment” relative to other vendors.

Anyway, great thoughts on disclosure in the other blog, despite our difference over when and how these practices started. I am ancient in Internet years and perhaps more prone than most to dispute historic facts. Thus I encourage everyone to search early disclosures for further perspective on a “Beginning” and how things used to run.

Updates:

@ErrataRob points out SATAN was automating what CERT had already outed, and that the BUGTRAQ mailing list (started in 1993) was meant to crowd-source disclosures after CERT proved slow at it. Before CERT, people traded vulns in secret for a long time. CERT made trading harder, but it was BUGTRAQ that really shut down trading, because reporting became so easy.

@4Dgifts points out discussion of vulns on the comp.unix.security USENET group started around 1984.

@4Dgifts points out a December 1994 debate where the norm clearly was not full disclosure. The author even suggests blackhats masquerade as whitehats to get early access to exploits:

All that aside, it is not my position to send out full disclosure, much as I might like to. What I sent to CERT was properly channeled through SCO’s CERT contact. CERT is a recognized and official carrier for such materials. 8LGM is, I don’t know, some former “black hat” types who are trying pretty hard to wear what looks like a “white hat” these days, but who can tell? If CERT believes in you then I assume you’ll be receiving a copy of my paper from them; if not, well, I know you’re smart enough to figure it out anyway.

[…]

Have a little patience. Let the fixed code propagate for a while. Give administrators in far off corners of the world a chance to hear about this and put up defenses. Also, let the gory details circulate via CERT for a while — just because SCO has issued fixes does not mean there aren’t other vendors whose code is still vulnerable. If you think this leaves out the freeware community, think again. The people who maintain the various login suites and other such publically available utilities should be in contact with CERT just as commercial vendors are; they should receive this information through the same relatively secure conduits. They should have a chance to examine their code and if necessary, distribute corrected binaries and/or sources before disclosure. (I realize that distributing fixed sources is very similar to disclosure, but it’s not quite the same as posting exploitation scripts).

US President Calls for Federal 30-day Breach Notice

Today the US moved closer to a federal consumer data breach notification requirement (healthcare has had a federal requirement since 2009 — see Eisenhower v Riverside for why healthcare is different from consumer).

PC World says a presentation to the Federal Trade Commission sets the stage for a Personal Data Notification & Protection Act (PDNPA).

U.S. President Barack Obama is expected to call Monday for new federal legislation requiring hacked private companies to report quickly the compromise of consumer data.

Every state in America has taken a different approach to breach deadlines, typically led by California (starting in 2003 with SB1386 consumer breach notification) and more recently by healthcare. Letting the states go first has given the Feds time to see what works before proposing a single standard.

In 2008 California moved to a more aggressive 5-day notification requirement for healthcare breaches, after a crackdown on UCLA executive management missteps in the infamous Farrah Fawcett breaches (under Gov. Schwarzenegger).

California this month (AB1755, effective January 2015, approved by the Governor September 2014) relaxed its healthcare breach rules from 5 to 15 days after reviewing 5 years of pushback on interpretations and fines.

For example, in April 2010, the CDPH issued a notice assessing the maximum $250,000 penalty against a hospital for failure to timely report a breach incident involving the theft of a laptop on January 11, 2010. The hospital had reported the incident to the CDPH on February 19, 2010, and notified affected patients on February 26, 2010. According to the CDPH, the hospital had “confirmed” the breach on February 1, 2010, when it completed its forensic analysis of the information on the laptop, and was therefore required to report the incident to affected patients and the CDPH no later than February 8, 2010—five (5) business days after “detecting” the breach. Thus, by reporting the incident on February 19, 2010, the hospital had failed to report the incident for eleven (11) days following the five (5) business day deadline. However, the hospital disputed the $250,000 penalty and later executed a settlement agreement with the CDPH under which it agreed to pay a total of $1,100 for failure to timely report the incident to the CDPH and affected patients. Although neither the CDPH nor the hospital commented on the settlement agreement, the CDPH reportedly acknowledged that the original $250,000 penalty was an error discovered during the appeal process, and that the correct calculation of the penalty amount should have been $100 per day multiplied by the number of days the hospital failed to report the incident to the CDPH for a total of $1,100.
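The corrected CDPH calculation quoted above reduces to simple per-day arithmetic. A minimal sketch, using the figures from the example (the function names and the use of calendar days are my own illustrative assumptions):

```python
# Sketch of the corrected CDPH penalty calculation described above:
# $100 per day late, rather than a flat $250,000.
from datetime import date

PENALTY_PER_DAY = 100  # dollars, per the corrected calculation

def late_days(deadline: date, reported: date) -> int:
    """Days the report came in after the deadline (0 if on time)."""
    return max((reported - deadline).days, 0)

def penalty(deadline: date, reported: date) -> int:
    """Total penalty for reporting after the deadline."""
    return PENALTY_PER_DAY * late_days(deadline, reported)

# Breach "confirmed" February 1, 2010; the deadline five business
# days later was February 8; the hospital reported February 19.
deadline = date(2010, 2, 8)
reported = date(2010, 2, 19)
print(late_days(deadline, reported))  # 11
print(penalty(deadline, reported))    # 1100
```

Eleven days late at $100 per day yields the $1,100 settlement figure from the quote.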

It is obvious that too long a timeline hurts consumers. Too short a timeline forces mistakes: covered entities rush to conclusions, then sink time into contesting unjust fines and repairing their reputations.

Another risk with too-short timelines (and a complaint you will hear from investigation companies) is that early notification undermines quiet investigations (e.g. criminals will erase their tracks). This is a valid criticism; however, it does not clearly outweigh the benefits of early notification to victims.

First, a law-enforcement delay caveat is meant to address this concern. AB1755 allows a report to be submitted 15 days after the end of a law-enforcement imposed delay period, similar to caveats found in prior requirements to assist important investigations.
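The deadline rule with its law-enforcement caveat can be sketched as a small date computation. This is a minimal illustration assuming a 15-day window whose clock restarts when any law-enforcement delay lifts; the names and the exact tolling behavior are my assumptions, not a reading of the statute's text:

```python
# Sketch of an AB1755-style reporting deadline with a
# law-enforcement delay caveat (assumed 15-day window).
from datetime import date, timedelta
from typing import Optional

REPORTING_WINDOW = timedelta(days=15)

def report_deadline(breach_detected: date,
                    le_delay_ends: Optional[date] = None) -> date:
    """Latest date a breach report may be submitted."""
    start = breach_detected
    # If law enforcement imposed a delay, the clock restarts
    # when that delay period ends.
    if le_delay_ends is not None and le_delay_ends > start:
        start = le_delay_ends
    return start + REPORTING_WINDOW

print(report_deadline(date(2015, 3, 1)))                     # 2015-03-16
print(report_deadline(date(2015, 3, 1), date(2015, 4, 10)))  # 2015-04-25
```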

Second, we have not seen large improvements in attribution or accuracy from extended investigation time, mostly because politics start to settle in. I am reminded of Walmart admitting in 2009 to a 2005 breach; apparently they used the time to prove they did not have to report credit card theft.

Third, weigh the value against the objective of protecting data from breach. Consider Mandiant's 30-day 2012 report for the South Carolina Department of Revenue: it ultimately was unable to determine who attacked (though it still hinted at China), and it is doubtful more time would have resolved that question. The AP has reported Mandiant charged $500K or more, and it is also doubtful many will find such high costs justified. Compare that investigation spend with the cost of improving victim protection:

Last month, officials said the Department of Revenue completed installing the new multi-password system, which cost about $12,000, and began the process of encrypting all sensitive data, a process that could take 90 days.

I submit to you that a reasonably short and focused investigation time saves money and protects consumers early. Delay for private investigation brings little benefit to those impacted. Fundamentally, who attacked tends to matter less than how a breach happened, and determining how takes far less time to investigate. As an investigator I always want to get to the who, yet I recognize this is not in the best interest of those suffering. So we see diminishing value in waiting and increased value in notification. Best to apply fast pressure; 30 days seems reasonable enough to allow investigations to reach conclusive and beneficial results.

Internationally, Singapore has the shortest deadline I know of: just 48 hours. If anyone thinks keeping track of all the US state requirements has been confusing, working globally gets really interesting.

Update, Jan 13:

Brian Krebs blogs his concerns about the announcement:

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose publicly the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

First, federal preemption of state laws sounds worse than it probably is. Covered entities of course want local control at first, so they can weigh in heavily on politicians and set the rule. Yet look at how AB1755 unfolded in California: the medical lobby tried to get the notification window moved from 5 days to 60, and ended up at 15. A federal 30-day rule, even where preemptive, isn't completely out of the blue.

Second, disclosure of “how” a breach happened is a separate issue. The payment industry is the most advanced in this area of regulation; its council releases detailed methods privately in bulletins. The FBI also has private channels to notify entities of what to change. Even so, generic bulletins are often sufficient to be actionable. That is why I mentioned the South Carolina report earlier; it shows useful details can be made public and still remain actionable:

Mandiant Breach Report on SCDR

Obama is also expected today to make a case in front of the NCCIC for better collaboration between the private and government sectors (Press Release). That will be the forum for this separate issue. It reminds me of the 1980s debate about control of the Internet, led by Rep. Glickman and decided by President Reagan; the outcome was a new NIST and the awful CFAA. Let's see if we can do better this time.

Letters From the Whitehouse: