Where is the Revolution in Intelligence? Public, Private or Shared?

Watching Richard Bejtlich’s recent “Revolution in Intelligence” talk about his government training and the ease of attribution was very enjoyable, although at times it brought to mind the CIA World Factbook errors of the early 1990s.

Slides that accompany the video are available on Google Drive.

To get this post off the ground, let me say I will be the first to stand up and defend US government officials as competent and highly skilled professionals. Yet I also will call out an error when I see one, and this post is essentially that. Bejtlich is great, yet he still makes some silly errors.

Often I see people characterize government as made up of inefficient troglodytes falling behind. That’s annoying. Just as often I see people lionize nation-state capabilities as superior to those of any other organization. Also annoying. The truth is somewhere in between: sometimes the government does great work, and sometimes it blows compared to the private sector.

Take the CIA World Factbook I mentioned above as an example. It has been unclassified since the 1970s, and by the early 1990s it was published on the web. Given wider distribution, its “facts” came under closer scrutiny from academics. Non-government people who had long studied these places or lived in them (arguably the world’s true leading experts) read the Factbook and wanted to help improve it: outsiders looking in and offering assistance. Perhaps some of you remember the “official” intelligence peddled by the US government at that time?

Bejtlich in his talk gives a nod towards academia being a thorough environment and even offers several criteria for why academic work is superior to that of some governments (not realizing he should include his own). Perhaps this is because he is now working on a PhD. It is odd to me that he fails to realize this academic community was just as prolific and useful in the 1990s, gathering intelligence, publishing it, giving talks and sending documents to anyone interested. His presentation makes it sound as if, before search engines appeared, gathering data required nation-state sized military departments walking uphill both ways in a blizzard.

Aside from having this giant blind spot to what he calls the “outsider” community, I also fear I am listening to someone with no field experience gathering intelligence. Sure, image analysis is a skill. Sure, we can sit in a room and pore over every detail to build up a report on some faraway land. On one of my private sector security teams I had a former US Air Force technician who developed film from surveillance planes; he hated interacting with people and loved being in the darkroom. But what does Bejtlich think of actually walking into an environment as an equal, being on the ground, living among people, as a measure of “insider” intelligence skill?

Almost three decades ago I stepped off a plane into a crowd of unfamiliar faces in a small country in Asia. Over the next five weeks I embedded myself in mountain villages, lived with families on the great plains, wandered with groups through jungles and gathered as much information as I could on the decline of monarchical rule in the face of democratic pressure.

One sunny day on the shoulder of a mountain stands out in my memory. As I hiked down a dusty trail a teenage boy dressed all in black walked towards me, carrying a small book under his arm. He didn’t speak English, so we communicated in broken phrases and hand gestures. He said he was a member of a new party.

Mao was his leader, he said. The poor villages felt they weren’t treated well and had decided to do something about it. I asked about Lenin; the boy had never heard the name. Stalin? Again the boy didn’t know. Mao was the inspiration for his life, and he was pleased about this future for his village.

This was before the 1990s, and by most “official” accounts there were no studies or theories about Maoists in this region until at least ten years later. I mention it here not to show that individuals with a little fieldwork can make a discovery; it should be obvious military schools don’t have a monopoly on intel. The question is what happened to that data. Where did the information go and who asked about it? Did others have easy access to the data gathered?

Yes, someone from the private sector should talk about “The Revolution in Private Sector Intelligence”. Perhaps we can find someone with experience working on intelligence in the private sector for many, many years to tell us what has changed for them. Maybe there will be stories of pre-ChoicePoint private sector missions, flying in on a moment’s notice to random places to gather intelligence on employees who were stealing money and IP. And maybe non-military experience will unravel why Russian operations in the private sector had to be handled differently from those in other countries?

Going by Bejtlich’s talk it would seem that such information gathering simply didn’t exist if the US government wasn’t the one doing it. What I hear from his perspective is that you go to a military school that teaches you how to do intelligence. Then you graduate and work in a military office. Then you leave that office to teach outsiders, because they can learn too.

He sounds genuinely incredulous to discover that someone in the private sector is trainspotting. If you are familiar with the term, you know many people enjoy building highly detailed and very accurate logs of transportation as a hobby. Bejtlich apparently is unaware, despite this being a well-known practice for a very long time.

A new record of trainspotting has been discovered from 1861, 80 years earlier than the hobby was first thought to have begun. The National Railway Museum found a reference to a 14 year old girl writing down the numbers of engines heading in and out of Paddington Station.

It reminds me a bit of how things must have moved away from military intelligence for the School of Oriental and African Studies in London (now just called SOAS). The British cleverly set up in London a unique training school during the First World War, as explained in a 1917 issue of “Nature”:

…war has opened our eyes to the necessity of making an effort to compete vigorously with the activities — political, commercial, and even scientific and linguistic — of the Germans in Asia and Africa. We have discovered that their industry was rarely disinterested, and that political propaganda was too often at the root of “peaceful penetration” in the field of missionary, scientific, and linguistic effort.

In other words, a counter-intelligence school was born. Here the empire could maintain its military grip around the world by developing the skills to better gather intelligence and understand enemy culture (German then, but ultimately native).

By the 1970s SOAS, reflecting the rapidly changing British global position, seemed to take on a wider purpose. It reached out and looked at new definitions of who might benefit from the study and art of intelligence gathering. By 1992 regulars like you or me could attend and sit within the hulking shell of a former global analysis engine. Academics there focused on intelligence gathering related to revolution and independence (e.g. how to maintain profits in trade without being a colonial power).

I was asked by one professor to consider staying on for a PhD to help peel apart Ghana’s 1956 transition away from colonial rule, for purely academic purposes of course. Tempted as I was, LSE instead set the next chapters of my study; LSE itself seems to have become known sometime during the Second World War as a public/private shared intelligence analyst training school (Bletchley Park staff tried to convince me that Zygalski, inventor of the perforated sheets used to break Enigma, lectured at LSE, although I could find no records to support that claim).

Fast forward five years to 1997, and the Corner House is a good example of academics in London who formalized public intelligence reports (starting in 1993?) into a commercial portfolio. In their case the “enemy” was more along the lines of companies or even countries harming the environment. This example might seem a bit tangential until you ask someone for expert insights, including field experience, to better understand the infamous pipeline caught up in a cyberwar.

Anyway, without droning on and on about the richness of an “outside” world, Bejtlich does a fine job describing some of the issues he had adjusting. He just seems to have been blind to communities outside his own and is pleased to now be discovering them. His “inside” perspective on intelligence is really just his view of inside/outside, rather than any absolute one. Despite pointing out how highly he regards academics who source material widely, he unfortunately doesn’t follow his own advice. His talk would have been so much better with a wee bit more depth of field and some history.

Let me drag in an interesting example that may help make my point: private analysts not only can be as good as or better than government, they may even be just as secretive and political.

Eastman Kodak investigated complaints of fogged film and found something mighty peculiar: the corn husks from Indiana it was using as packing material were contaminated with the radioactive isotope iodine-131 (I-131). Eastman Kodak at the time had some of the best researchers in the country on its team (the company even had its own nuclear reactor in the 1970s), and they discovered something that was not public knowledge: those farms in Indiana had been exposed to fallout from the 1945 Trinity test in New Mexico, the world’s first nuclear bomb explosion, which ushered in the atomic age. Kodak kept this exposure quiet.

By 1946 the American film industry giant had realized, from clever digging into the corn husk material used for packaging, that the US government was poisoning its citizens. The company filed a formal complaint and kept quiet. Our government responded by giving Kodak advance warning of military research, helping to keep any signs of dangerous nuclear fallout hidden from the public.

Good work by the private sector, helping the government screw the American public without detection, if you see what I mean.

My point is we do not need to say the government provides the best path to world-class intelligence skills. Putting pride aside, there may be a wider world of training. Nor should we say private-sector experience makes someone the best in the world at uncovering the many ongoing flaws in government intelligence. Top skills can be achieved in different schools of thought, which serve different purposes. Kodak clearly worried about assets differently than the US government did, while they still kind of ended up worrying about the same thing (colluding, if you will). Hard to say who evolved faster.

By the way, speaking of relativity, I also find it amusing that Bejtlich’s talk is laced with his political preferences as landmines: Hillary Clinton is set up as so obviously guilty of dumb errors you’d be a fool not to convict her. President Obama is portrayed as maliciously sweeping a clear and present danger of terrorism under the carpet, putting us all in grave danger.

And last but not least, we’re led to believe that if we get a scary black bag indicator we should suspect someone who had something to do with Krav Maga (historians might say an Austro-Hungarian or at least Slovakian man, but I’m sure we are supposed to think Israeli). Is that kind of like hinting at America by pointing to someone who had something to do with Karate (Bruce Lee!)?

And one last thought. Bejtlich also mentions gathering intelligence on soldiers in the Civil War as if it would be like waiting for letters in the mail. In fact there were many more routes for “real time” information. Soldiers were skilled at sneaking behind lines (pun not intended), tapping copper wires and listening, then riding back with updates. Poetry was a common way of passing time before a battle, creating clever turns of phrase about current events, perhaps a bit like how Twitter functions today. “Deserters” were a frequent source of updates as well, carrying news across lines.

I get what Bejtlich is trying to say about the speed of information being faster today, and I have to technically agree with that one aspect of a revolution; of course he’s right about the raw speed of a photo being posted to the Internet and seen by an analyst. Yet we shouldn’t undersell what constituted “real-time” 150 years ago, especially if we think about those first trainspotters…

Hillary, Official Data Classification, and Personal Servers

The debate over Hillary Clinton’s use of email reminds me of a Goldilocks tech management dilemma. Users tend to think you are running too slow or too fast, never just right:

Too slow

You face user ire, even potential revolt, as IT (let alone security) comes to be seen as the obstacle to progress. Users want access to get their job done faster and better, so they push data to cloud services and apps, bring in their own devices and run like they have no fear, because trust has shifted to clever new service providers.

We all know that has been the dominant trend, and anyone caught saying “BlackBerry is safer” is at risk of being kicked out of the cool technology clubs. Even more to the point, you have many security thought leaders saying over and over to choose cloud and iPad because they are “safer”.

I mentioned this in a blog post in 2011 when the Apple iPad was magically “waived” through security assessments for USAID.

Today it seems ironic to look back at Hillary’s ire. We expect our progressive politicians to look for modernization opportunities, and here is a perfect example:

Many U.S. Agency for International Development workers are using iPads–a fact that recently drew the ire of Secretary of State Hillary Clinton when she sat next to a USAID official on a plane, said Jerry Horton, chief information officer at USAID. Horton spoke April 7 at a cloud computing forum at the National Institute of Standards and Technology in Gaithersburg, Md.

Clinton wanted to know why a USAID official could have an iPad while State Department officials still can’t. The secret, apparently, lies in the extensive use of waivers. It’s “hard to dot all the Is and cross all the Ts,” Horton said, admitting that not all USAID networked devices are formally certified and accredited under Federal Information Security Management Act.

“We are not DHS. We are not DoD,” he said.

While the State Department requires high-risk cybersecurity, USAID’s requirements are much lower, said Horton. “And for what is high-security it better be on SIPR.”

Modernizing, innovating, asking the government to reform is a risky venture. At the time I don’t remember anyone saying Hillary was being too risky, or that her ire was misplaced in asking for technology improvements. There was a distinct lack of critique heard, despite my blog post sitting in the top three Google search results for weeks. If anything I heard the opposite: that the government should trust and catch up to Apple’s latest whatever.

Too fast

Now let’s look at the other perspective. Dump the old, safe and trusted BlackBerry so users can consume iPads like candy going out of style, and you face watching them stumble and fall on their diabetic face. Consumption of data is the goal, and yet it also is the danger.

Without getting too far into the weeds of the blame game (figuring out who is responsible for a disaster), it may be better to look at why there will be accidents and misunderstandings in a highly politicized environment.

What will help us make sure we avoid someone extracting data off SIPR/NIPR without realizing there is a “TS/SAP” classification incident ahead? I mean, what if the majority of the data in question pertains to a controversial program, let’s say for example drones in Pakistan, which may or may not be secret depending on one’s politics? Colin Powell gives us some insight into the problem:

…emails were discovered during a State Department review of the email practices of the past five secretaries of state. It found that Powell received two emails that were classified and that the “immediate staff” working for Rice received 10 emails that were classified.

The information was deemed either “secret” or “confidential,” according to the report, which was viewed by CNN.

In all the cases, however — as well as Clinton’s — the information was not marked “classified” at the time the emails were sent, according to State Department investigators.

Powell noted that point in a statement on Thursday.

“The State Department cannot now say they were classified then because they weren’t,” Powell said. “If the Department wishes to say a dozen years later they should have been classified that is an opinion of the Department that I do not share.”

“I have reviewed the messages and I do not see what makes them classified,” Powell said.

This classification game is at the heart of the issue. Reclassification happens. Aggregation of data that is not secret can make it secret, as the sketch below illustrates. If we characterize this as a judgment flaw by only one person, or even three, we may be postponing the critical need to review wider systemic issues in decision-making and tools.
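To make the aggregation point concrete, here is a minimal sketch in Python (the rules, topic tags and levels are all invented for illustration, not any agency’s actual policy) of how fields that are individually unclassified can trip a higher classification once combined:

```python
# Hypothetical aggregation rules: each topic tag on its own is UNCLASSIFIED,
# but certain combinations are treated as a higher classification.
LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

AGGREGATION_RULES = [
    # (combination of topics present together, resulting level) -- invented examples
    ({"program_name", "target_location", "strike_schedule"}, "SECRET"),
    ({"program_name", "budget_line"}, "CONFIDENTIAL"),
]

def classify_fields(fields):
    """Return the classification of a set of topic tags.

    Each tag alone is UNCLASSIFIED; the level is raised only when a rule's
    full combination is present (the aggregation risk).
    """
    level = "UNCLASSIFIED"
    for required, rule_level in AGGREGATION_RULES:
        if required.issubset(fields) and LEVELS.index(rule_level) > LEVELS.index(level):
            level = rule_level
    return level

# A message mentioning only the program name stays UNCLASSIFIED...
print(classify_fields({"program_name"}))  # UNCLASSIFIED
# ...but the same facts combined cross a line nobody marked on any single
# email, which is how honest senders get surprised years later.
print(classify_fields({"program_name", "target_location", "strike_schedule"}))  # SECRET
```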

To paraphrase the ever-insightful Daniel Barth-Jones: smart people at the top of their political game who make mistakes aren’t “stupid”; we have to evaluate whether systems that fail to prevent mistakes by design are…

Just right

Assuming we agree we want to go faster than “too slow”, and we do not want to run ahead “too fast” into disasters… a middle ground needs to come into better focus.

Giving up “too slow” means a move away from blocking change. And I don’t mean merely achieving FISMA certification. That is seen as a tedious low bar for security rather than the right vehicle for pushing towards the top end. We need to take compliance seriously as a guide while also embracing hypotheses and creative thinking to tease out a reasonable compromise.

We’re still in the early dinosaur days of classification technology, sitting all the way over at the slow end of the equation. I’ve researched solutions for years, seen some of the best engines in the world (Varonis, Olive), and it’s not yet looking great. We have many more tough problems to solve, leaving open a market ripe for innovation.

Note the disclaimer on Microsoft’s “Data Classification Toolkit”:

Use of the Microsoft Data Classification Toolkit does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfillment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.

Let me explain the problem by way of analogy, to be brief.

Cutting-edge research on robots focuses on predictive capabilities that enable off-road driving free from human control. A robot starts with near-field sensors, which give it about 20 feet of vision ahead to avoid immediate danger. Then the robot needs to see much further to avoid danger altogether.

This really is the future of risk classification. The better your classification of risks, the better your predictive plan, and the less you have to make time-pressured disaster-avoidance decisions. And of course driverless is a relative term; these automation systems still need human input.

In a DARPA LAGR Program video the narrator puts it simply:

A short-sighted robot makes poor decisions

Imagine longer-range vision algorithms that generate an “optimal path”, applied to massive amounts of data (different classes of email messages instead of trees and rocks in the great outdoors), dictating what you actually get to see.

[Image: DARPA LAGR “optimal path” view]

What I like about this optimal path illustration is the perpendicular alignment of two types of vision. The visible world is flat, and then there is the greater optimal-path theory, presented as a wall-like circle, easily queried without actually being “seen”. This is like putting your faith in a map because you can’t actually see all the way from San Francisco to New York.

The difference between the short and the long view highlights why any future of safe autonomous systems will depend on the processing power of the end nodes, such that they can both create larger areas of “flat” rings and build out the “taller” optimal paths.
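As a rough sketch of that two-tier idea (every field name, topic and threshold below is invented for illustration), think of a cheap near-field check that only blocks the obviously dangerous next item, plus a far-field planner that scores whole routes through the data before you commit to one:

```python
# Near-field: a cheap per-item check, like the robot's 20 feet of vision.
def near_field_ok(item):
    """Block only the immediately obvious hazards (already-marked items)."""
    return item["marking"] not in {"SECRET", "TOP SECRET"}

# Far-field: a predictive score over an entire sequence of items, like the
# taller "optimal path" walls in the LAGR illustration.
def far_field_risk(route):
    """Estimate downstream risk of releasing a whole set of items.

    Hypothetical model: risk grows with how many sensitive topics the route
    touches in combination, not with any single item alone.
    """
    topics = set()
    for item in route:
        topics.update(item["topics"])
    return len(topics & {"program_name", "target_location", "strike_schedule"}) / 3.0

def choose_route(routes, risk_budget=0.5):
    """Pick the lowest-risk route that passes both checks, if any."""
    viable = [r for r in routes
              if all(near_field_ok(i) for i in r) and far_field_risk(r) <= risk_budget]
    return min(viable, key=far_field_risk) if viable else None

route_a = [{"marking": "UNCLASSIFIED", "topics": {"program_name"}}]
route_b = [{"marking": "UNCLASSIFIED", "topics": {"program_name", "target_location", "strike_schedule"}}]
print(choose_route([route_a, route_b]) is route_a)  # True: far-field risk rules out route_b
```

The far-field scoring is where the processing-power argument bites: the longer the horizon you want scored, the more compute the node needs to do it in time.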

Here is where “personal” servers come into play. Power becomes a determinant of vision and autonomy. Personal investments often can increase processing power faster than government bureaucracy and depreciation schedules allow. I mean, if the back-end system looks at the ground ahead and classifies it as sand (unsafe to proceed), while the autonomous device does its own assessment on its own servers and decides it is looking at asphalt (safe for speed), who is right?
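A minimal sketch of that disagreement (invented labels and policy, not any real system): when the authoritative back-end and the local node classify the same ground differently, one defensible default is to treat the answer as contested and run with the stricter label until a human adjudicates:

```python
# Strictness order for the hypothetical labels used in the sand/asphalt analogy.
ORDER = {"safe": 0, "unknown": 1, "unsafe": 2}

def reconcile(backend_label, node_label):
    """Resolve a classification conflict conservatively.

    Hypothetical policy: agreement wins outright; disagreement is flagged as
    contested and the stricter label applies until someone adjudicates.
    """
    if backend_label == node_label:
        return backend_label, False   # no conflict
    stricter = max(backend_label, node_label, key=lambda label: ORDER[label])
    return stricter, True             # contested, escalate for review

# The back-end says "sand" (unsafe), the node's own servers say "asphalt" (safe):
# proceed as if unsafe, and record the dispute for later review.
label, contested = reconcile("unsafe", "safe")
print(label, contested)  # unsafe True
```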

The better the predictive algorithms, the taller the walls of vision into the future, and that begs for power and performance enhancements. Back at the start of this post, when IT isn’t providing users the kind of power they want for speed, we see users move their workloads towards BYOD and cloud. Classification becomes a power struggle, as forward-looking decisions depend on reliable data classification from an authoritative source.

If authoritative back-end services accidentally classify data as safe and later reverse to unsafe (or vice versa), the nodes and people depending on that classification service should not be the only targets in an investigation of judgment error.

We can joke about how proper analysis would always choose a “just right” Goldilocks long-term path, yet in reality the debate is about building a high-performance data classification system that reduces her cost of error.