Category Archives: Security

Hillary, Official Data Classification, and Personal Servers

The debate over Hillary Clinton’s use of email reminds me of a Goldilocks dilemma in technology management. Users tend to think you are running too slow or too fast, never just right:

Too slow

You face user ire, even potential revolt, as IT (let alone security) becomes seen as the obstacle to progress. Users want access to get their jobs done faster and better, so they push data into cloud apps, bring in their own devices, and run as if they have nothing to fear, because trust has shifted onto clever new service providers.

We all know that has been the dominant trend, and anyone caught saying “BlackBerry is safer” risks being kicked out of the cool technology clubs. Even more to the point, many security thought leaders keep saying over and over to choose cloud and iPad because they are safer.

I mentioned this in a blog post in 2011 when the Apple iPad was magically “waived” through security assessments for USAID.

Today it seems ironic to look back at Hillary’s ire. We expect our progressive politicians to look for modernization opportunities and here is a perfect example:

Many U.S. Agency for International Development workers are using iPads–a fact that recently drew the ire of Secretary of State Hillary Clinton when she sat next to a USAID official on a plane, said Jerry Horton, chief information officer at USAID. Horton spoke April 7 at a cloud computing forum at the National Institute of Standards and Technology in Gaithersburg, Md.

Clinton wanted to know why a USAID official could have an iPad while State Department officials still can’t. The secret, apparently, lies in the extensive use of waivers. It’s “hard to dot all the Is and cross all the Ts,” Horton said, admitting that not all USAID networked devices are formally certified and accredited under Federal Information Security Management Act.

“We are not DHS. We are not DoD,” he said.

While the State Department requires high-risk cybersecurity, USAID’s requirements are much lower, said Horton. “And for what is high-security it better be on SIPR.”

Modernizing, innovating, asking government to reform is a risky venture. At the time I don’t remember anyone saying Hillary was being too risky, or that her ire was misplaced in asking for technology improvements. There was a distinct lack of critique, despite my blog post sitting in the top three Google search results for weeks. If anything I heard the opposite: that the government should trust Apple and catch up to its latest whatever.

Too fast

Now let’s look at the other perspective. Dump the old, safe and trusted BlackBerry so users can consume iPads like candy, and you face watching them stumble and fall on their diabetic face. Consumption of data is the goal, and yet it also is the danger.

Rather than getting into the weeds of the blame game, figuring out who is responsible for a disaster, it may be better to look at why accidents and misunderstandings will happen in a highly politicized environment.

What will help us make sure nobody extracts data off SIPR/NIPR without realizing a “TS/SAP” classification incident lies ahead? What if the majority of the data in question pertains to a controversial program, let’s say, for example, drones in Pakistan, which may or may not be secret depending on one’s politics? Colin Powell gives us some insight into the problem:

…emails were discovered during a State Department review of the email practices of the past five secretaries of state. It found that Powell received two emails that were classified and that the “immediate staff” working for Rice received 10 emails that were classified.

The information was deemed either “secret” or “confidential,” according to the report, which was viewed by CNN.

In all the cases, however — as well as Clinton’s — the information was not marked “classified” at the time the emails were sent, according to State Department investigators.

Powell noted that point in a statement on Thursday.

“The State Department cannot now say they were classified then because they weren’t,” Powell said. “If the Department wishes to say a dozen years later they should have been classified that is an opinion of the Department that I do not share.”

“I have reviewed the messages and I do not see what makes them classified,” Powell said.

This classification game is at the heart of the issue. Reclassification happens. Aggregation of data that is not secret can make it secret. If we characterize it as a judgment flaw by only one person, or even three, we may be postponing a critical review of wider systemic issues in decision-making and tools.
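To make the aggregation problem concrete, here is a minimal sketch in Python. The labels and the escalation threshold are purely hypothetical illustrations of the “mosaic effect”, not any agency’s actual policy:

```python
# Hypothetical sketch: records that are individually low-sensitivity can
# combine into a higher classification. Labels and thresholds are invented
# for illustration only.

LEVELS = ["UNCLASSIFIED", "CONFIDENTIAL", "SECRET", "TOP SECRET"]

def classify_aggregate(item_levels, escalation_threshold=3):
    """Classify a set of items taken together.

    Baseline rule: the highest individual level wins. Illustrative
    aggregation rule: three or more CONFIDENTIAL-or-above items together
    escalate the result by one level.
    """
    if not item_levels:
        return "UNCLASSIFIED"
    highest = max(LEVELS.index(level) for level in item_levels)
    sensitive = sum(1 for level in item_levels if LEVELS.index(level) >= 1)
    if sensitive >= escalation_threshold and highest < len(LEVELS) - 1:
        highest += 1  # the mosaic effect: the whole exceeds its parts
    return LEVELS[highest]
```

The point of the sketch is that the escalation rule lives in the system, not in any one sender’s head, which is exactly why blaming a single person for an aggregate-classification incident misses the systemic question.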

To paraphrase the ever-insightful Daniel Barth-Jones: smart people at the top of their political game who make mistakes aren’t “stupid”; we have to ask whether systems that fail to prevent mistakes by design are….

Just right

Assuming we agree we want to go faster than “too slow”, and that we do not want to run ahead “too fast” into disasters, a middle ground needs to come into better focus.

Giving up “too slow” means a move away from blocking change. And I don’t mean merely achieving FISMA certification. That is seen as a tedious low bar for security rather than the right vehicle for pushing toward the top end. We need to take compliance seriously as a guide while we also embrace hypotheses and creative thinking to tease out a reasonable compromise.

We’re still in the dinosaur days of classification technology, sitting all the way over at the slow end of the equation. I’ve researched solutions for years and seen some of the best engines in the world (Varonis, Olive), and it’s not yet looking great. We have many more tough problems to solve, leaving a market ripe for innovation.

Note the disclaimer on Microsoft’s “Data Classification Toolkit”:

Use of the Microsoft Data Classification Toolkit does not constitute advice from an auditor, accountant, attorney or other compliance professional, and does not guarantee fulfillment of your organization’s legal or compliance obligations. Conformance with these obligations requires input and interpretation by your organization’s compliance professionals.

Let me explain the problem by way of analogy, to be brief.

Cutting-edge research on robots focuses on predictive capabilities that enable driving off-road free from human control. A robot starts with near-field sensors, which give it about 20 feet of vision ahead to avoid immediate danger. Then the robot needs to see much further to avoid danger altogether.

This really is the future of risk classification. The better your classification of risks, the better your predictive plan, and the fewer time-pressured disaster-avoidance decisions you have to make. And of course “driverless” is a relative term; these automation systems still need human input.

In a DARPA LAGR Program video the narrator puts it simply:

A short-sighted robot makes poor decisions

Imagine longer-range vision algorithms that generate an “optimal path”, applied to massive amounts of data (different classes of email messages instead of trees and rocks in the great outdoors), dictating what you actually get to see.
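The short-sighted-robot point can be sketched in a few lines of code. This is emphatically not the LAGR system, just a toy terrain grid (costs are made up) comparing a planner that only looks one cell ahead against one that scores whole paths with Dijkstra’s algorithm:

```python
# Toy sketch: a "20-foot view" greedy planner versus a long-range planner
# that evaluates whole paths. Grid values are terrain costs; higher = harder.
import heapq

GRID = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]

def neighbors(r, c):
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]):
            yield nr, nc

def optimal_cost(start, goal):
    """Dijkstra over the whole grid -- the 'tall' long-range view."""
    pq, seen = [(GRID[start[0]][start[1]], start)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node == goal:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nr, nc in neighbors(*node):
            heapq.heappush(pq, (cost + GRID[nr][nc], (nr, nc)))

def greedy_cost(start, goal, max_steps=20):
    """Always step onto the cheapest adjacent unvisited cell -- the 20-foot view."""
    node, cost, visited = start, GRID[start[0]][start[1]], {start}
    for _ in range(max_steps):
        if node == goal:
            return cost
        options = [(GRID[r][c], (r, c)) for r, c in neighbors(*node)
                   if (r, c) not in visited]
        if not options:
            return None  # boxed in by its own trail
        step_cost, node = min(options)
        cost += step_cost
        visited.add(node)
    return None
```

On this grid the one-step planner pays 12 because it steps onto expensive terrain it could not see past, while the whole-path planner routes around the obstacle for a total cost of 8. A short-sighted robot makes poor decisions.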

[Image: LAGR optimal-path view]

What I like about this optimal-path illustration is the perpendicular alignment of two types of vision. The visible world is flat. And then there is the greater, theoretical optimal path, presented as a wall-like circle, easily queried without actually being “seen”. This is like putting your faith in a map because you can’t actually see all the way from San Francisco to New York.

The difference between the short and long views highlights why any future of safe autonomous systems will depend on the processing power of end nodes, such that they can both create larger “flat” rings and build out “taller” optimal paths.

Here is where “personal” servers come into play. Power becomes a determinant of vision and autonomy. Personal investments often can increase processing power faster than government bureaucracy and depreciation schedules allow. If the back-end system looks at the ground ahead and classifies it as sand (unsafe to proceed), while the autonomous device does its own assessment on its own servers and decides it is looking at asphalt (safe for speed), who is right?

The better the predictive algorithms, the taller the walls of vision into the future, and that begs for power and performance enhancements. Back to the start of this post: when IT isn’t providing users the kind of power they want for speed, we see users move their workloads toward BYOD and cloud. Classification becomes a power struggle, as forward-looking decisions depend on reliable data classification from an authoritative source.

If authoritative back-end services accidentally classify data as safe and later reverse to unsafe (or vice versa), the nodes and people depending on that classification service should not be the only targets in an investigation of judgment error.

We can joke that proper analysis would always choose a “just right” Goldilocks long-term path, yet in reality the debate is about building a high-performance data classification system that reduces her cost of error.

BBC’s false history of long distance communication

One might think history would be trivially easy, given how these days every fact is on the Internet at the tips of our fingers. However, being a historian still takes effort, perhaps even talent. Why?

The answer is simple: “the value of education is not the learning of many facts but the ability of the mind to think”. I’ll let you try and search to figure out the person who said that.

A historian is trained to apply expertise in thinking, running facts through a system of sound logic that others can validate, rather than just leaving facts on their own. It is a bit like a chef cooking a delicious meal rather than offering you a bowl of raw ingredients. The analysis to get the right combination of ingredients cooked together can be hard. And on top of finding the results desirable, we also need ways to know the preparations were clean and can be trusted.

Take for example a BBC magazine article about long distance communication, which cooks up a soup called “How Napoleon’s semaphore telegraph changed the world”.

This article unfortunately offers factual conclusions that are poorly prepared and end up tasting all wrong. Let’s start with three basic assertions the BBC has asked readers to swallow:

  1. The last stations were built in 1849, but by then it was clear that the days of line-of-sight telegraphy were done.
  2. The military needs had disappeared, and latterly the operators’ main task was transmitting national lottery numbers.
  3. The shortcomings of visual communication were obvious. It only functioned in daytime and in good weather.

First point: Line-of-sight telegraphy is still used to this day. Anyone sailing the Thames, or any modern waterway for that matter, would happily tell you they rely on a system of lights and flags. I wrote it into our book on cloud security. The BBC itself has a story about semaphore adoption during nuclear disarmament campaigns. As long as we have visual sensors, these signal days will never be done. Dare I mention the line-of-sight communication scene in the futuristic sci-fi film The Martian?

Second point: Military needs are not the only needs. This should be obvious from the first point, as well as from common sense; if it were true, you would never be reading a blog. More to the stupidity of this reasoning, the French system resorted to a lottery because it went bankrupt. The inventor had pinned all his hopes for a very expensive system on military financing, and that didn’t come through. The lottery was a last-ditch attempt to find support after the military walked.

[Image: semaphore lottery]

A sad footnote: the French military didn’t see the Germans coming in later wars. I could dive into why military needs didn’t disappear, but that would be more complicated than proving there were other needs and the system simply wasn’t funded well enough to survive.

Third point: Anyone heard of a lighthouse? What does it do best? Functions at night and in bad weather, am I right? Fires on a hill (e.g. pyres) also work quite well at night. Or a flashlight, such as the one on your cell-phone.

Try out the Jolla phone app “Morse sender” if you want to communicate over distance at night and bad weather using Morse code. Real shortcomings of visual communication come during thick smoke (e.g. old gunpowder battles or near coal power), which leads to audio signals such as the talking drum, fog horns, bagpipes and songs or cries.

Ok, so all three points above are false, easily disproved and tossed into the bin. Now for the harder part, the overall conclusion in two sentences from BBC magazine:

Smoke, fire, light, flags – since time immemorial man had sought to speak over space.

What France did in the first half of the 19th Century was create the first ever system of distance communication.

A shame the writer acknowledges fire and flags here, because those are the very facts used above to disprove the article’s own analysis (they work at night, and are still in use). Now can we disprove “first ever system of distance communication”?

I say this is hard because I’m giving the writer benefit of the doubt. Putting myself in their shoes they obviously see a big difference between the “immemorial” methods used around the world and a brief French experiment with an expensive, unfunded militaristic system.

As hard as I try, honestly I don’t see why we should call the French system first. Consider this passage from archaeologist Charles Jones’ 1873 “Antiquities of the Southern Indians”:

[Image: passage on southern Indian smoke signals]

Note this is a low-cost, night-time resilient system that leaves no trace. Pretty damning evidence of being earlier and arguably better. We have fewer first-hand proofs from earlier, yet it would be easy to argue there were complex fire signals as far back as 150 BCE.

The Greek historian Polybius explained in The Histories that fire signals were used to convey complex messages over distance via cipher. Torches would be raised and lowered to signal the column and row of a letter.

6 The most recent method, devised by Cleoxenus and Democleitus and perfected by myself, is quite definite and capable of dispatching with accuracy every kind of urgent messages, but in practice it requires care and exact attention. 7 It is as follows: We take the alphabet and divide it into five parts, each consisting of five letters. There is one letter less in the last division, but this makes no practical difference. 8 Each of the two parties who are about to signal to each other must now get ready five tablets and write one division of the alphabet on each tablet, and then come to an agreement that the man who is going to signal is in the first place to raise two torches and wait until the other replies by doing the same. 10 This is for the purpose of conveying to each other that they are both at attention. 11 These torches having been lowered the dispatcher of the message will now raise the first set of torches on the left side indicating which tablet is to be consulted, i.e. one torch if it is the first, two if it is the second, and so on. 12 Next he will raise the second set on the right on the same principle to indicate what letter of the tablet the receiver should write down.

It even works at night and in bad weather!
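The scheme Polybius describes is essentially a coordinate cipher, later known as the Polybius square. A short sketch in Python, using the conventional 5×5 Latin grid with I and J sharing a cell (the Greek original had its own 24-letter layout):

```python
# Sketch of the Polybius torch cipher: each letter maps to (tablet, position),
# signaled as torch counts on the left and right. Conventional 5x5 Latin
# grid with J folded into I.

ALPHABET = "ABCDEFGHIKLMNOPQRSTUVWXYZ"  # 25 letters, no J

def to_torches(message):
    """Encode a message as (left, right) torch counts per letter:
    left = which tablet (row), right = position on the tablet (column),
    both 1-indexed, matching Polybius' description."""
    signals = []
    for ch in message.upper():
        ch = "I" if ch == "J" else ch
        if ch not in ALPHABET:
            continue  # the system carried letters only
        idx = ALPHABET.index(ch)
        signals.append((idx // 5 + 1, idx % 5 + 1))
    return signals

def from_torches(signals):
    """Decode (left, right) torch pairs back into letters."""
    return "".join(ALPHABET[(left - 1) * 5 + (right - 1)]
                   for left, right in signals)
```

For example, the letter A is one torch on each side, and any message round-trips through the torch encoding, which is the whole point Polybius was making: arbitrary text, not just pre-agreed signals, over line of sight.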

Speaking of which, there may have been an even earlier system, perhaps around 247 BCE. Given the engineering marvel of the Pharos of Alexandria lighthouse, someone may know more about its use for long-distance communication by line of sight.

Has the point been made that the first ever system of distance communication did not come from the French during their revolution?

I think the real conclusion here, considering BBC magazine’s attempt to persuade us, is that someone was digging for reasons to be proud of French militarism. Had they bothered to think more deeply, or to seek more global sources of data, they might have avoided releasing such a disappointing article.

When Native Americans demonstrated excellent long distance communication systems, European settlers mocked them. Yet the French built one and suddenly we’re supposed to remember it and say…oh la la? No thanks, too hard to swallow. That’s poor analysis of facts.

The German New Year’s Eve Terror Alerts

On the one hand we have RT telling us that credible predictions of threats to safety were based on a tip from foreign intelligence services:

“We received names,” [Munich police chief Hubertus] Andrae said. “We can’t say if they are in Munich or in fact in Germany.”

“At this point, we don’t know if these names are correct, if these people even exist, or where they might be. If we knew this, we would be a clear step further,” he added.

According to the Turkish security agency, the wider European strategy by the five individuals included churches and the sites of mass gatherings.

This led to travel warnings for people to avoid train stations, such as this one:

[Image: Munich New Year’s Eve 2016 travel warning]

On the other hand, did the predicted events happen? Consider a BBC story reflecting back on New Year’s Eve in Germany, which does not seem to be put in context of any advance warnings.

The scale of the attacks on women at the city’s central railway station has shocked Germany. About 1,000 drunk and aggressive young men were involved.

City police chief Wolfgang Albers called it “a completely new dimension of crime”. The men were of Arab or North African appearance, he said.

Women were also targeted in Hamburg.

But the Cologne assaults – near the city’s iconic cathedral – were the most serious, German media report. At least one woman was raped, and many were groped.

Most of the crimes reported to police were robberies. A volunteer policewoman was among those sexually molested.

[…]

What is particularly disturbing is that the attacks appear to have been organised. Around 1,000 young men arrived in large groups, seemingly with the specific intention of carrying out attacks on women.

The problem with these stories side-by-side is twofold. First, increased police vigilance at train stations across Germany was the defensive plan against people experiencing terror, yet we’re now being told these attacks happened without notice. Violence against women at this scale deserves real-time detection and response. Are authorities capable?

Second, is there clarity on what constitutes “organized” attacks? As we learn more, puzzle pieces of conspiracy are being placed on the table: “there had been reports of similar attacks on New Year’s Eve in other cities such as Hamburg and Stuttgart, although not on as massive a scale”.

I have not yet seen anyone report events in this light. The BBC report holds out the train station as a scene of terror without any mention of the prior warnings, or of the police admission that locations were still unknown: “We can’t say if they are in Munich or in fact in Germany”.

The looming dilemma is whether we can now say planned terror attacks happened in Germany on New Year’s Eve. As time goes on, the number of women coming forward to report assault has been increasing. Why would we say this was not a terror attack, especially when women soon afterward said they now fear being in public places? If we call it terror, some will complain of a slide toward loss of rights. If we don’t call it terror, some will complain of ignoring rights.