The DPRK Humanitarian Crisis

In private circles I have been agitating for a while about the humanitarian crisis in North Korea. Although I have collected a bit of data and insight over the years, it just hasn’t seemed like the sort of thing people were interested in or asking about. Not exactly good conversation material.

Then earlier this year I was at Bletchley Park, reading about Alan Turing. A quote of his prompted me to post my thoughts here on North Korea’s humanitarian crisis. Turing said, basically (paraphrasing):

I helped my country defeat the Nazis, who used chemical castration to torture people including gays and Jews. In 1952 my country wants to give me the same treatment as a form of “managing” gays.

Turing’s life story was not well known until long after he died. As we learn more about his tragic end, it turns out that despite exceptional service to his country he was horribly misunderstood and mistreated. He fought to preserve dignity against spurious charges; his social life and personal preferences caused him much trouble with British authorities. Turing was kept under constant surveillance and driven into horrible despair. After suffering the effects of chemical castration, required by a court order, he committed suicide.

I’ll write more about the Turing incident in another post. Suffice it to say here that in 1950s England there was an intense climate of fear-mongering against “gay communists”. Thousands of men were sent to prison or chemically castrated without any reasonable cause.

Between 1945 and 1955 the number of annual prosecutions for homosexual behaviour rose from 800 to 2,500, of whom 1,000 received custodial sentences. Wolfenden found that in 1955 30% of those prosecuted were imprisoned.

The English inflicted horrible treatment, even torture, on gay men. To what end, exactly? Turing was baffled at being arrested for “gross indecency”, not least because just ten years earlier he had helped his country fight to protect people against such treatment. A gruesome early death was predictable for those monitored and questioned by police, even without charges.

The reform activist Antony Grey quotes the case of police enquiries in Evesham in 1956, which were followed by one man gassing himself, one throwing himself under a train, leaving widow and children, and an 81 year old dying of a stroke before sentence could be passed.

Why do I mention this? Think about the heavily politicized reports written by Mandiant or CrowdStrike. We see China, Russia and Iran accused of terrible things as if we should only look elsewhere for harm. If you work for one of these companies today and do not think it possible that the abuses you condemn abroad could happen at home, this post is for you. I recommend you consider how Turing felt betrayed by the country he helped defend.

Given Turing’s suffering, can we think more universally, look further forward? Wouldn’t that improve our moral high ground and the justifications for our actions?

Americans looking at North Korea often say they are shocked and saddened by the treatment of prisoners there. I’ll give a quick example. Years ago in Palo Alto, California, a colleague recommended a book he had just finished. He said it proved without a doubt how horribly communism fails and causes starvation, unlike our capitalism that brings joy and abundance. An obvious touch of naive free-market fervor was bleeding through, so I questioned whether we should trust single-source defection stories. I asked how we might verify such data when access was closed.

I ran straight into the shock and disgust of someone reacting as if I were excusing torture or justifying famine. How dare I question accusations about communism, the root of evil? How dare I doubt the testimony of an escapee who suffered so much to bring us truth about immorality behind closed doors? Clearly I did not understand free-market superiority, which this book was really about. Our good must triumph over their evil. Did I not see the obviously worst type of government in the world? The conversation clouded quickly, with him reiterating confidence in market theory and me causing grief by asking if that survivor story was sound or complete on its own.

More recent news fortunately has brought a more balanced story than the material we discussed back then. It has become easier to discuss the humanitarian crisis at a logical level since more data is available, with more opportunity for analysis. Even so, the Associated Press points out that despite thousands of testimonials we still have an incomplete picture of North Korea and no hard estimates.

The main source of information about the prison camps and the conditions inside is the nearly 25,000 defectors living in South Korea, the majority of whom arrived over the last five years. Researchers admit their picture is incomplete at best, and there is reason for some caution when assessing defector accounts.

I noticed the core of the problem when watching Camp 14, a movie that uses first-person testimony from a camp survivor to give insights into conditions in North Korea. Testimony is presented as proof of one thing: the most awful death camps imaginable. Camp workers are also interviewed to back up the protagonist’s story. However a cautious observer would also notice the survivor’s account has notable gaps and a questionable basis.

The survivor, who was born in the prison, says he became enraged with jealousy when he discovered his mother helping his older brother. He turned his own mother in to camp authorities. That is horrible in and of itself, but he goes on to say he thought he could negotiate better treatment for himself by undermining his family. Later he wonders in front of the camera whether, as a young boy, he might have misheard or misunderstood his mother; wonders if he sent his own mother to be executed in front of him for no reason other than to improve his own situation.

The survivor also says that one day, much later, he started talking to a prisoner who came from the outside, a place that sounded like a better world. The survivor plots an escape with this prisoner. The prisoner from the outside is then electrocuted upon touching the perimeter fence; the survivor climbs over the prisoner’s body, using it as insulation to free himself.

These are just a couple of examples (the role of his father is another; the old man who rehabilitated him is another) that jumped out at me as informative in a different way than perhaps was intended. This is a survivor who describes manipulation for his own gain at the expense of others, while others in his story seem to be helping each other and working towards collective gains.

I’ve watched a lot of survivor story videos and met in person with prison camp survivors. Camp 14 did not in any way sound like trustworthy testimony. I gave it the benefit of the doubt while wondering if we would hear the stories of the others, those who were not just opportunists. My concern was that this survivor comes across like a trickster who knows how to wiggle for self-benefit regardless of harm or disrespect to those around him. Would we really treat this story as our best evidence?

The answer came when major elements of his story appeared to have been formally disputed. He quickly said others were the ones making up their stories; then he stepped away from the light.

CNN has not been able to reach Shin, who noted in a Facebook post apologizing for the inaccuracies in his story that “these will be my final words and this will likely be my final post.”

My concern is that outsiders looking for evidence of evil in North Korea will wave hands over facts and try to claim exceptional circumstances. It may be exceptional, yet without caution someone could quickly make false assumptions about the cause of suffering and act on false pretenses, actually increasing the problem or causing worse outcomes. The complicated nature of the problem deserves more scrutiny than easy vilification based on stale reports from those in a position to gain the most.

One example of how this plays out was a NYT story about North Korean soldiers attacking Chinese residents of border towns. A reporter suggested soldiers today are desperate for food because of a famine 20 years ago. The story simply did not add up as told. Everything in it suggested to me the attackers wanted status items, such as cash and technology. Certain types of food also may carry status, but the story did not really seem to be about food to relieve famine, to compensate for communist market failure.

Thinking back to Turing, how do we develop a logical framework, let alone a universal one, to frame the ethical issues around intervention against North Korea? Are we starting with the right assumptions, as well as keeping an open mind on solutions?

While we can dig for details to shame North Korea for its prison culture, we also must consider that the International Centre for Prison Studies ranks the United States second only to the Seychelles in per-capita incarceration rate (North Korea is not listed). According to 2012 data almost 1% of all US citizens are in prison. Americans should think about what quantitative analysis of prisons shows, such as here:

incarceration_rates

There also are awful qualitative accounts from inside the prisons, such as the sickening Miami testimony by a former worker about killing prisoners through torture, and prisoner convictions turning out to have zero integrity.

Human Rights Watch asked “How Different are US Prisons” given that federal judges have called them a “culture of sadistic and malicious violence”. Someone even wrote a post claiming half of the world’s worst prisons are in the US (again, North Korea is not listed).

And new studies tell us American county jails are run as debtors’ prisons: full of people guilty of very minor crimes yet kept behind bars by court-created debt.

Those issues are not lost to me as I read the UN Report of the Commission of Inquiry on Human Rights in the Democratic People’s Republic of Korea. Hundreds of pages give detailed documentation of widespread humanitarian suffering.

Maintaining a humanitarian approach, a universal theory of justice, seems like a good way to keep ourselves grounded as we wade into understanding a crisis. To avoid the Turing disaster we must keep in mind where we are coming from as well as where we want others to go.

Take for example new evidence from a system where police arrest people for minor infractions and hold them in fear and against their will, in poor conditions without representation. I’ll let you guess where such a system exists right now:

They are kept in overcrowded cells; they are denied toothbrushes, toothpaste, and soap; they are subjected to the constant stench of excrement and refuse in their congested cells; they are surrounded by walls smeared with mucus and blood; they are kept in the same clothes for days and weeks without access to laundry or clean underwear; they step on top of other inmates, whose bodies cover nearly the entire uncleaned cell floor, in order to access a single shared toilet that the city does not clean; they develop untreated illnesses and infections in open wounds that spread to other inmates; they endure days and weeks without being allowed to use the moldy shower; their filthy bodies huddle in cold temperatures with a single thin blanket even as they beg guards for warm blankets; they are not given adequate hygiene products for menstruation; they are routinely denied vital medical care and prescription medication, even when their families beg to be allowed to bring medication to the jail; they are provided food so insufficient and lacking in nutrition that inmates lose significant amounts of weight; they suffer from dehydration out of fear of drinking foul-smelling water that comes from an apparatus on top of the toilet; and they must listen to the screams of other inmates languishing from unattended medical issues as they sit in their cells without access to books, legal materials, television, or natural light. Perhaps worst of all, they do not know when they will be allowed to leave.

And in case that example is too fresh, too recent with too little known, here is a well-researched look at events sixty years ago:

…our research confirms that many victims of terror lynchings were murdered without being accused of any crime; they were killed for minor social transgressions or for demanding basic rights and fair treatment.
[…]
…in all of the subject states, we observed that there is an astonishing absence of any effort to acknowledge, discuss, or address lynching. Many of the communities where lynchings took place have gone to great lengths to erect markers and monuments that memorialize the Civil War, the Confederacy, and historical events during which local power was violently reclaimed by white Southerners. These communities celebrate and honor the architects of racial subordination and political leaders known for their belief in white supremacy. There are very few monuments or memorials that address the history and legacy of lynching in particular or the struggle for racial equality more generally. Most communities do not actively or visibly recognize how their race relations were shaped by terror lynching.
[…]
That the death penalty’s roots are sunk deep in the legacy of lynching is evidenced by the fact that public executions to mollify the mob continued after the practice was legally banned.

The cultural relativity issues in our conflict with North Korea are something I really haven’t seen anyone talking about anywhere, although it seems like something that needs attention. Maybe I am just not in the right circles.

Perhaps I can put it in terms of a slightly less serious topic.

I often see people mocking North Korea for a lack of power and for living in the dark. Meanwhile I never see people connect that lack of power to a June 1952 American bombing campaign that knocked out 90% of North Korea’s power infrastructure. This is not to say bomb attacks from sixty years ago and modern fears of dependency on infrastructure are directly related. It is far more complex.

However it stands to reason that a country in fear of infrastructure attacks will encourage resiliency, and its culture shifts accordingly. A selfish dictator may also encourage resiliency in order to hoard power, greatly complicating analysis. Still I think Americans may over-estimate the future of past models built on inefficiency and dependency on centralized power. North Korea, or Cuba for that matter, could end up being global leaders as they figure out new decentralized and more sustainable infrastructure systems.

Sixty years ago the Las Vegas strip’s glare and consumption would have been a marvel of technology, a show of great power. Today it seems more like an extravagant waste, an annoyance preventing us from studying the far more beautiful night sky full of stars that need no power.

Does this future sound too Amish? Or are you one of the people ranking the night sky photos so highly that they reach most popular status on all the social sites? Here’s a typical 98.4% “pulse” photo on 500px:

nightlake-hipydeus
Night at the Lake by hipydeus

Imagine what Google Glass enhanced for night vision would be like as a new model. Imagine the things we would see if we reversed from street lights everywhere, shifting away from cables running to power plants, and went towards a more sustainable/resilient goal of localized power and night vision. Imagine driving without the distraction of headlights at night, with the ability to see anyway, as military drivers around the world have been trained…

I’ll leave it at that for now. So there you have a few thoughts on a humanitarian crisis, not entirely complete, spurred by a comment by Turing. As I said earlier, if you are working at Mandiant or CrowdStrike please think carefully about his story. Thanks for reading.

A Remote Threat: The White House Drone Incident

Have you heard the story about a drone that crashed into the White House yard?

Wired has done a follow-up story, drawing from a conference to discuss drone risks.

The conference was open to civilians, but explicitly closed to the press. One attendee described it as an eye-opener.

Laughably, Wired seems to quote just one anonymous attendee, perhaps as payback for being denied access to the event. Who was this sole voice and why leave them anonymous? What made it an eye-opener?

In my conference talks over the past few years I have explicitly mentioned attacks on auto-autos (self-driving cars) based on our fiddling with drones.

Perhaps we are not getting much attention, despite doing our best to open eyes. Instead of some really scary stuff, the Wired perspective looks only at a very limited and tired example.

But the most striking visual aid was on an exhibit table outside the auditorium, where a buffet of low-cost drones had been converted into simulated flying bombs. One quadcopter, strapped to 3 pounds of inert explosive, was a DJI Phantom 2, a newer version of the very drone that would land at the White House the next week.

Surely a flying bomb is not the most striking (pun intended?) visual aid. I would be happy to give any journalist multiple reasons why a kamikaze does not present the most difficult problem to solve.

On the scale of things I would want to build defenses against, their most striking example seems already within reach. There are far more interesting ones, which is why I have been giving presentations on the risks and what defenders could do about them (Black Hat, CONFidence).

We also have tweeted about taking over the SkyJack drone by manipulating a flaw in its attack script, essentially a mistake in radio logic. A drone on autopilot following a mapped GPS route would be straightforward to defeat, which we also have had some fun discussions about, at least in terms of ships (flying in water, get it?). And then there is Lidar jamming…
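
To make that radio logic concrete, here is a minimal sketch, my own illustration rather than the actual SkyJack code or the specific flaw we found, of how a SkyJack-style script selects targets: sniff 802.11 beacon frames and match the transmitter MAC against hardcoded vendor OUI prefixes. Anything that blindly trusts an OUI prefix can of course be fed a spoofed one, which is the general class of mistake worth hunting for.

```python
# Sketch of SkyJack-style target selection (illustrative, not the real script).
# Requires root and a wireless interface in monitor mode (e.g. "wlan0mon").
from scapy.all import sniff
from scapy.layers.dot11 import Dot11

# Example OUI prefixes associated with one hobby-drone vendor (illustrative).
DRONE_OUIS = ("90:03:b7", "a0:14:3d", "00:26:7e")

def check_beacon(pkt):
    dot11 = pkt.getlayer(Dot11)
    # Management frame (type 0), beacon (subtype 8): an access point announcing itself.
    if dot11 and dot11.type == 0 and dot11.subtype == 8:
        mac = (dot11.addr2 or "").lower()
        if mac.startswith(DRONE_OUIS):
            print("Possible target drone AP:", mac)

sniff(iface="wlan0mon", prn=check_beacon, store=False)
```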

Anyway, back in April of 2014 I had tweeted about DJI drone controls and no-waypoint zones. The drone company was expressing a clear global need to steer clear of airports. I thought I should call attention to our 2014 research and this detail as soon as I saw the White House news, so I replied to some tweets.

dtweet6

Nine retweets!? Either I was having a good day or the White House raises people’s attention level. Maybe all our talking about this in the past can be blown off now that someone just flew a drone into the wrong yard. It’s a whole new ballgame of awareness. While the White House drone incident could cause a backlash against drone manufacturers for lack of zone controls, the incident also brings a much-needed inflection point at the highest and broadest levels, which is long overdue.

Our culture tends to leave the average person to be harmed by the market, as if to say let them figure it out. Once a top dog, a celebrity with everything, is harmed or threatened, then things get real. It is like we say “if they can’t defend themselves, no one can” and so the regulatory wheels start to spin.

An incident with zero impact that can raise awareness sounds great to me. As I explained to an FCC Commissioner last year, American regulation is driven by celebrity events. This one was pretty close and may get us some good discussion points. That is why I see this incident finally bringing together at least three profiles of drone enthusiast. Fresh and new people will be stepping into the ring to tinker and offer innovative solutions; old military and scientific establishment folks (albeit with some VIP nonsense and closed-door snafus) will come out of the woodwork to ask for a place in the market; and of course those who have been fiddling away for a while without much notice will take a deep breath, write a blog post and wonder who will read it this time.

Three drone enthusiast profiles

Last year I sauntered into a night-time drone meetup in San Francisco. It was sponsored by a high-powered east-coast pseudo-governance organization. And when I say drone meetup, I am not just talking about the lobbyist drone in fancy clothes who talked about bringing “community” closer to the defense industry’s “shared objectives” (“you are getting very sleepy”). I am talking about a room stuffed with people interested in pushing technology boundaries, mostly in the air. I would like to share several observations about that meetup here. Roughly speaking, I found the audience fit into these interest levels:

  • Profile 1: The hobbyist. Easily annoyed by thinking about risks, the hobbyist is the typical attendee at technology meetups. Some people look at the clouds above the picnic, some look at the ants. These new technology meetups almost always are filled with cloud watchers who don’t want to worry. Hobbyists would ask “what do you pilot?” I would reply “Sorry, not here to pilot, I study how to remove drones, drop them from the air.” This went over like a lead balloon. You could sense the deflation in mood. When asked “why would anyone want to do that?” my response was “Nothing concrete yet. So many possible reasons a drone could be a threat.” Rather than why, I want to know how, and I told people “when the day comes someone needs a drone stopped, I would like to avoid panic about how.” Hobbyists have amazing ideas about drones changing the world for the better; someone needs to ask them “why” and “is that safe” at strategic points in the conversation.
  • Profile 2: The professional/pilot. Swapping stories about successes and failures, this group was jaded by reality. A gold mine of lessons not widely shared was available to those willing to ask. A favorite story was from someone who built gas-leak sensor drones “too accurate” to be used. A power company (PG&E) was forced to admit its sensors (mostly manual, staff in vehicles) were dangerously outdated and wrong. The quality gap opened was so large PG&E became angry and tried to kill his drone program. Another great story was about mines laser-mapped by drones. Software stitched together drone photos and maps, using cloud compute clusters, then enhanced the result with environmental details (a minimal sketch of the stitching step appears after this list). New business models were being explored because drones could inexpensively create replica worlds; gaming companies and architects were target markets. Want to see how an underground restaurant concept looks at 5:30PM as the sun sets, or in a morning rain-storm? Click, click, you can walk through virtual reality courtesy of drones. Another story was pure surveillance, although told as “tourism”. Go to a famous monument, pull out your pocket drone, launch it and quickly take a few thousand pictures; now you have a perfect 3D model. Statues, machines, buildings…the drone comes back with data you download and process into a perfect model of anything the drone can “see” on its little vacation. Since this story was told last year I also have to point out newer drones are faster: they process data in real time as they fly, instead of after a download.
  • Profile 3: The lobbyist. The lobbyist bridges the reality of risks with the promise of new sales. There is some belief that the military is light-years ahead of hobbyists and professionals in drone building and flying. Been there, done that; a business model (selling to the government) was solid and their engineers want to rule the technology leadership roost into the next business models. However they also openly admit the military-industrial complex has become so used to handouts that they fear missing the boat on consumer desire. A flood of new talent was scooping up drone kits and toys, a market that looked like it could dwarf the military-industrial one. Thus they hope for synergies and collaborations to license military tech to professionals, who will tell the stories that get hobbyists excited.
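
As promised above, here is a hedged sketch of the basic stitching step those professionals described, using OpenCV’s high-level Stitcher and made-up filenames. Real pipelines layer GPS/IMU data, cloud compute clusters and full 3D reconstruction on top of this simple panorama step.

```python
# Sketch: combine overlapping drone photos into one map-like panorama.
# Filenames are hypothetical stand-ins for a real capture set.
import cv2

files = ["drone_001.jpg", "drone_002.jpg", "drone_003.jpg"]
images = [img for img in (cv2.imread(f) for f in files) if img is not None]

# SCANS mode suits flat, nadir-style survey imagery (vs. handheld panoramas).
stitcher = cv2.Stitcher.create(cv2.Stitcher_SCANS)
status, panorama = stitcher.stitch(images)

if status == cv2.Stitcher_OK:
    cv2.imwrite("site_map.jpg", panorama)
else:
    print("Stitching failed with status", status)
```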

You could smell a three-way collision (at least, maybe more) brewing and bubbling. Yet the three groups stood apart as distinct. Political stakes were increasing: money and ideas were starting to flow, old power worried about disruption, seasoned vets gave guidance on where to take the technology and new horizons. It just didn’t yet seem quite the time for collaboration, let alone getting a security discussion going across all three groups.

Bringing Profiles Together

Going way, way back, I remember as a child when my grandfather handed me a drone he had built (mostly ruined, actually, but let’s just pretend he made it better). Having a grandfather who built drones did not seem all that special. Model trains, airplanes, boats…all that I figured to be the purview of old people fascinated with making big technology smaller so they could play with it. Kind of like the bonsai thing, I guess, where you think it’s something everyone would do when in fact very few can keep the damn thing alive.

Fast-forward to today and I realize my grandfather’s peculiar interest in drones might have been a little exceptional. Today groups everywhere are growing consistently larger and newly discovering uses for drones. If I drew a Venn diagram the circles would still seem separate and distinct from each other; drones simply are not part of everyday life yet, unlike technology such as glasses (the better to see you with). Roombas aside, my theory is the future looks incredibly bright if people can start thinking together about ethics and politics in the bigger drone picture, including risks.

Speaking of going back in time to understand the future, in 2013 I found my long-time drone interest leading to tweets useful at work for an infrastructure/operations giant. Could this be a model of convergence? I thought Twitter might help converge risk discussions with after-hours meetups, like talking about the forward-thinking people in Iowa demanding no-drone zones.

dtweet1

Clearly my humor did not win anyone’s attention. Not a single retweet or favorite. Crickets.

It also may just be that Twitter sucks as a platform and I have no followers. That’s probably why I’m back to blogging more. Does anyone find tweets conducive to real conversation? The best Twitter seems to do for me is shift conversation by allowing me to throw a fact in here or there, like I sit quietly with my remote Twitter control, every so often dropping stones into the Twitter pond.

When a news story broke in 2013 I had to jump in and say “hey, cool Amazon hobbyist (new) story and I think you could be overlooking a FedEx lobbyist (old) story”.

dtweet2

I was poking around some loopholes too, wondering whether the drones over SF could have a get-out-of-jail card if we wanted to take them down.

dtweet3

Kudos to Sina and Jack for the conversation. My tweets were at least reaching two or three people at this point.

And as anti-drone laws were popping up I occasionally would mention my research in public. Alaska wanted a law to make sure hunters could not use drones for unfair advantage.

Such a rule seemed ironic, considering how guns have made killing a “sport” nearly anyone can “play”. A completely unbalanced and technology-laden air/ground/sea attack strategy on nature was common talk, at least when I was in Alaska. Anyway someone thought drones were taking an already automated sport of killing too far.

Illinois took the opposite approach to Alaska. Someone saw drones as potential interference with those out for a killing.

dtweet4

By April of 2014 I had built up a fair amount of detail on no-fly zones and strategies. We ran drones for testing and anti-drone antenna prototypes were being discussed. I gave myself a challenge: get a talk accepted and then publish an anti-drone device, similar to anti-aircraft, for the hobbyist or average home user.

Here’s a good picture of where I was coming from on this idea. One of the top drone manufacturers told me their drones were absolutely not going to stray into no-fly zones. What if they did anyway? The ethics were easy in this space of unauthorized entry: a system to respond seemed clearly justified and desired.

dtweet5

Haha. “No-way points”, get it? No? That’s OK, no one did. Not a single retweet or favorite for that map.
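
For anyone curious what a waypoint (or “no-way point”) check looks like underneath, here is a minimal sketch under my own assumptions: the zone coordinates and radius below are illustrative, not any vendor’s actual geofence data. The core is just a great-circle distance test against a list of restricted zones.

```python
# Sketch of a no-fly-zone check: is a reported GPS fix within a fixed
# radius of a restricted point? Zone data here is illustrative only.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in kilometers.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# (name, latitude, longitude, radius in km) -- made-up example zone
NO_FLY = [("SFO", 37.6213, -122.3790, 8.0)]

def in_no_fly_zone(lat, lon):
    return [name for name, zlat, zlon, r in NO_FLY
            if haversine_km(lat, lon, zlat, zlon) <= r]

print(in_no_fly_zone(37.62, -122.38))  # -> ['SFO']
```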

The point wasn’t completely lost on people, however. A little exposure meant I was called in for a short Loopcast episode, called Drone Hacking, which I suppose some people might have heard. The counter says 162,000 plays so far, which seems impossible to me. Maybe some of those numbers are from drones?

Anyway my big plan to release our research at a conference was knocked down when the Infiltrate popular voting system denied us a spot. We were going to show how we immediately, and I mean immediately, found a way to skyjack the Skyjack drones; it was a talk about general command and control strategy, redirection, ground-to-air, air-to-air and all kinds of fun stuff.

Denied by peers.

I resubmitted the same ideas to CanSecWest.

Denied by review board.

This pretty much shelved my excitement about explaining more details, e.g. those obvious bugs in SkyJack code (not picked up by any news, but at least credited by Samy when we reported them to him), and why insecure WiFi and services leave options for self-defense wide open:

drone-telnet

It is kind of obvious what’s wrong here; security is most sensational when addressed at a low level using basic stuff. This begs the question of how and when exactly to cross over the discussion from infosec/safety flaws into hobby or even professional forums. I confess I made a mistake. My focus has been more on what to do about bigger-picture issues, because I argue the individual sensor flaws go without saying.
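
To show just how basic the “basic stuff” is, here is a minimal sketch of checking whether a drone’s own access point answers on the telnet port. The address is a common default for hobby-drone APs (an assumption; adjust for your hardware), and this should only ever be pointed at devices you own.

```python
# Sketch: probe a drone AP for an open (often unauthenticated) telnet service.
import socket

DRONE_IP = ""   # common default gateway on a hobby drone's own WiFi AP
TELNET_PORT = 23

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(3)
    # connect_ex returns 0 when the TCP handshake succeeds
    open_port = sock.connect_ex((DRONE_IP, TELNET_PORT)) == 0

print("telnet open -- unauthenticated root shell likely" if open_port
      else "telnet closed or filtered")
```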

Yet I have to face the reality that the “flaws” audience, the people looking for ants, still may be the only place to talk about dropping drones out of the sky. Others will dismiss the topic until a serious, celebrity- or White House-level, event occurs. In my mind that is too late…

At last year’s EMCworld a guy on my staff was fully dedicated to drone safety tests; he was achieving real pilot skills by the time we ran public demos. Still, our safety research wasn’t detailed by any news source. The timing felt early, as if journalists were apprehensive about the story and the groups mentioned above too separate to generate a nice, broad general-audience piece.

So while the conference was explicitly open to the press, we had the opposite of a major celebrity-level disaster (we were told not to crash a hobby drone into the crowd, despite it raising our chances for attention). Our 30,000-person infrastructure/operations audience seemed to lack interest in any presentation on responses to evil drones. An attempt to cross over just turned into people asking if they could take home the drones as a conference prize. Thus we auctioned our four test units at the end of the show and management patted us on the back for quiet success. Ooops. Maybe we can do better getting the right kind of attention this year before real damage is done.

Beware the Sony Errorists

A BBC business story on the Sony breach flew across my screen today. Normally I would read through and be on my way. This story however was so riddled with strange and simple errors I had to stop and wonder: who really reads this without pause? Perhaps we need a Snopes for theories on hackers.

A few examples

Government-backed attackers have far greater resources at their disposal than criminal hacker gangs…

False. Criminal hacker gangs can amass far greater resources more quickly than government-backed ones. Consider how criminal gangs operate relative to the restrictions of the “governed”. Government-backed groups have constraints on budget, accountability, jurisdiction…. I am reminded of the Secret Service agent who told me how he had to scrape and toil for months to bring together an international team with resources and approval. When approval finally came, his group descended in a helicopter onto the helipad of a criminal property that was literally a massive gilded castle surrounded by exotic animals and vehicles. Government agencies were outclassed on almost every level, yet careful planning, working together and correct timing were on their side. The bust was successful despite strained resources across several countries.

Of course it is easy to find opposite examples. The government invests in the best equipment to prepare for some events and clearly we see “defense” budgets swell. This is not the point. In many scenarios of emerging technology you find innovation and resources are handled better by criminal gangs, who lack the constraints of being governed; criminals can be as lavish or unreasonable as they decide. Have you noticed anyone lately talking about how Apple or Google have more money than Russia?

Government-backed hackers simply won’t give up…

False. This should be self-evident from the answer above. Limited resources and even regime change are some of the obvious reasons why government-backed anything will give up. In the context of North Korea, let alone the wider history of conflict, we only have to look at a definition of the current armistice that is in place: “formal agreement of warring parties to stop fighting”.

Two government-backed sides in Korea formally “gave up” and signed an armistice agreement on July 27, 1953. At 10 a.m.

Perhaps some will not like this example because North Korea is notorious for nullifying the armistice as a negotiation tactic. Constant reminders of its intent for reunification make it seem like it has refused to give up. I’d disagree, on the principle of what an armistice means. Even so, let’s consider instead the U.S. role in Vietnam. On January 27, 1973 an “Ending the War and Restoring Peace in Viet-Nam” agreement was signed by the U.S. and the others in conflict; by the end of 1973 the U.S. had unquestionably given up attacks, and three years later North and South were united.

I also am tempted to point to famous pirates (Ching Shih or Peter Easton) who “gave up” after a career of being sponsored by various states to attack others. They simply inked a deal with one sponsor to retire.

“What you need is a bulkhead approach like in a ship: if the hull gets breached you can close the bulkhead and limit the damage…”

True with Serious Warning. To put it simply, bulkheads are a tool, not a complete solution. This is the exact mistake that led to the Titanic disaster. A series of bulkheads (with some fancy new-technology hand-waving of the time) was meant to keep the ship safe even when breached. This led people to refer to the design as “unsinkable”. So if the Titanic sank, how can bulkheads still be a thing to praise?

I covered this in my keynote presentation for the 2010 RSA Conference in London. Actually, I wasn’t expecting to be a keynote speaker and packed my talk with details. Then I found myself on the main stage, speaking right after Richard Clarke, which made it awkward to fit in my usual pace of delivery. Anyway, here’s a key slide from the keynote.

B8-T_gPIQAEKjaw

The bulkheads gave a false sense of confidence, allowing a greater disaster to unfold for a combination of reasons. See how “wireless issues” and “warnings ignored” and “continuing to sail” and “open on top” start to add up? In other words, if you hit something and detect a leak you tend to make an earlier and more complete assessment, one that affects the whole ship. If you instead think “we’ve got bulkheads, keep going”, a leak that could be repaired or slowed turns very abruptly into a terminal event, a sinking.
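
A toy model makes the point. The numbers below have nothing to do with the real ship; they only show that bulkheads open on top buy time rather than safety when a leak goes unrepaired.

```python
# Toy model: "open on top" bulkheads only delay flooding. An unrepaired
# leak eventually spills over each wall in turn until the whole hull fills.
BULKHEAD_HEIGHT = 10.0    # arbitrary units
levels = [0.0] * 5        # water level in each compartment
LEAK_RATE = 1.0           # inflow into the breached compartment per minute

for minute in range(1, 200):
    levels[0] += LEAK_RATE                 # leak never repaired: "keep going"
    for i in range(len(levels) - 1):
        spill = max(0.0, levels[i] - BULKHEAD_HEIGHT)
        levels[i] -= spill                 # water above the wall pours over...
        levels[i + 1] += spill             # ...into the next compartment
    if all(level >= BULKHEAD_HEIGHT for level in levels):
        print(f"All compartments flooded by minute {minute}")
        break
```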

Clearly Sony had already been breached in one of its bulkheads. We saw the PlayStation breach in 2011 have dramatic and devastating impact. Sony kept sailing, probably with warnings ignored elsewhere, communications issues, and thinking improvements in one bulkhead area of the company were sufficient. Another breach devastated them in 2013 and they continued along…so perhaps you can see how bulkheads are a tool that offers great promise yet requires particular management to be effective. Bulkheads all by themselves are not a “need”. Like a knife, or any other tool that makes defense easier, what people “need” is to learn how to use them properly: keep the pointy side in the right direction.

Another way of looking at the problem

The rest of the article runs through a mix of several theories.

One theory mentioned is to delete data to avoid breaches. This is good specific advice, not good general advice. If we were talking about running out of storage room people may look at deletion as a justified option. If the data is not necessary to keep and carries a clear risk (e.g. post-authorization payment card data fines) then there is a case to be made. And in the case of regulation then the data to be deleted is well-defined. Otherwise deleting poorly-defined data actually can make things worse through rebellion.

A company tells its staff that the servers will be purging data and you know what happens next? Staff start squirreling away data on every removable storage device and cloud provider they can find, because they still see that data as valuable, necessary to be successful, and there’s no real penalty for them. Moreover, telling everyone to delete email that may incriminate is awkward strategy advice (e.g. someone keeps a copy and you delete yours, leaving you without anything to dispute their copy). Also it may be impossible to ask this of environments where data is treated as a formal and permanent record. People in isolation could delete too much or the wrong stuff, discovered too late by upper management. Does that risk outweigh the unknown potential of breach? Pushing a risk decision away from the captain of a ship and into bulkheads without good communication can lead to Titanic miscalculations.

Another theory offered is to encrypt and manage keys perfectly. Setting aside perfect anything-management, encryption is seriously challenged by an imposter event like Sony. A person inside an environment can grab keys. Once they have the keys, they have to be stopped by requiring other factors of identification. Asking the imposter to provide something they have or something they are is where the discussion often will go: stronger authentication controls both to prevent attacks spreading and to help alert management to a breach in progress. Achieving this tends to require better granularity in data (fewer bulkheads) and also more of it (fewer deletions). The BBC correctly pointed out that there is a balance, yet by this point the article is such a mess they could say anything in conclusion.
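
As a rough sketch of that idea, assuming illustrative libraries (pyotp, cryptography) and a made-up release_key() flow of my own: a stolen key store is less useful when key release demands a second factor, and each failed attempt doubles as a breach-in-progress signal for management.

```python
# Sketch: gate data-key release behind a second factor and alert on failures.
# Library choices (pyotp, cryptography) and the flow itself are illustrative.
import pyotp
from cryptography.fernet import Fernet

DATA_KEY = Fernet.generate_key()            # stands in for a managed data key
totp = pyotp.TOTP(pyotp.random_base32())    # per-user enrolled second factor

def release_key(user: str, otp_code: str) -> bytes:
    if not totp.verify(otp_code):
        # a failed release attempt is itself a signal worth monitoring
        print(f"ALERT: failed key release attempt by {user}")
        raise PermissionError("second factor required")
    return DATA_KEY

# An imposter holding credentials but not the token trips the alert:
try:
    release_key("insider", "000000")
except PermissionError:
    pass
```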

What I am saying here is think carefully about threats and solutions if you want to manage them. Do not settle on glib statements that get repeated without much thought or explanation, let alone evidence. Containment can work against you if you do not manage it well, adding cost and making a small breach into a terminal event. A boat obviously will use any number of technologies, new and old, to keep things dry and productive inside. A lot of what is needed relates to common sense about looking and listening for feedback. This is not to say you need some super guru as captain; rather it is the opposite. You need people committed to improvement, to reporting things are not as they should be, in order to achieve a well-run ship.

Those are just some quick examples and how I would position things differently. Nation-states are not always in a better position; often they are hindered. Attackers have weaknesses and commitments. Finding a way to make them stop is not impossible. And ultimately, throwing around analogies is GREAT as long as they are not incomplete or applied incorrectly. Hope that helps clarify how to use a little common sense to avoid the errors being made in journalists’ stories on the Sony breach.

Gov Fumbles Over-Inflated Sony Hack Attribution Ball

This (draft) post basically comes after reading one called “The Feds Got the Sony Hack Right, But the Way They’re Framing It Is Dangerous” by Robert Lee. Lee stated:

At its core, the debate comes down to this: Should we trust the government and its evidence or not? But I believe there is another view that has not been widely represented. Those who trust the government, but disagree with the precedent being set.

Lee is not the only person in government referring to this as the core of the debate. It smacks of being forced by those in government to choose one side or the other, for or against them. Such a binary depiction of governance, such a call for obedience, is highly politically charged. Do not accept it.

I will offer two concepts to help with the issue of choosing a path.

  1. Trust but Verify (As Reagan Used to Tell Gorbachev)
  2. Agile and Social/Pair Development Methods

So here is a classic problem: non-existent threats get over-inflated because of secret forums and debates. Bogus reports and false pretenses could very well be accidents, to be quickly corrected, or they may be intentionally used to justify policies and budgets, which requires more concerted protest.

If you know the spectrum, are you actually helping improve trust in government overall by working with them to eliminate error or correct bias? How does trusting government and its evidence, while also wanting to improve government, fit into the sides Lee quotes? It seems far more complicated than writing off skeptics as distrustful of government. It also has been shown that skeptics help preserve trust in government.

Take a moment to look back at a false attribution blow-up of 2011:

Mimlitz says last June, he and his family were on vacation in Russia when someone from Curran Gardner called his cell phone seeking advice on a matter and asked Mimlitz to remotely examine some data-history charts stored on the SCADA computer.

Mimlitz, who didn’t mention to Curran Gardner that he was on vacation in Russia, used his credentials to remotely log in to the system and check the data. He also logged in during a layover in Germany, using his mobile phone. …five months later, when a water pump failed, that Russian IP address became the lead character in a 21st-century version of a Red Scare movie.

Everything deflated once the report was investigated, thanks to public attention. Given the political finger-pointing that came out afterwards, it is doubtful that incident could have received appropriate attention in secret meetings. In fact, much of the reform of agencies and how they handle investigations comes as a result of public criticism of results.

Are external skepticism and interest/pressure the key to improving trust in government? Will we achieve more accurate analysis through more parallel and open computations? The “Big Data” community says yes. More broadly speaking, given how widely the Aktenzeichen XY … ungelöst “help police solve crimes” TV show has been emulated since it started in 1967, the general population probably would agree too.

Trust but Verify

British Prime Minister Margaret Thatcher famously quipped “Standing in the middle of the road is very dangerous; you get knocked down by the traffic from both sides.” Some might take this to mean it is smarter to go with the flow. As Lee highlighted, they say pick a side, either for trust in government or against. Actually, it often turns out to be smarter to reject this analogy.

Imagine flying a plane. Which “side” do you fly on when you see other planes flying in no particular direction? Thatcher was renowned for false-choice risk management, a road with only two directions where everyone chooses sides without exception. She was adamantly opposed to Gorbachev tearing down the wall, for example, because it did not fit her over-simplified risk-management theory. Verification of safety is so primitive in her analogy as to be worthless to real-world management.

Asking for verification should be a celebration of government and trust. We trust our government so much, we do not fear to question its authority. Auditors, for example, look for errors or inconsistencies in companies without being seen as a threat to trust in those companies. Executives further strengthen trust through skepticism and inquiry.

Consider for a moment an APT (really, no pun intended) study called “Decisive action: How businesses make decisions and how they could do it better“. It asked “when taking a decision, if the available data contradicted your gut feeling, what would you do?”

APT-doubt

Releasing incomplete data could reasonably be expected to get 90% push-back for more data or more analysis, according to this study. Those listening to the FBI claim that North Korea is responsible probably have a gut feeling contradicting the data. That gut feeling is more “are we supposed to accept incomplete data as proof of something? been there, done that, let’s keep going” than it is “we do not trust you”.

In the same study 38% said decisions are better when more people are involved, and 38% said more people did not help, so quantity alone isn’t the route to better outcomes. Quality remains a factor, so there has to be a reasonable bar for input, as we have found in Big Data environments. The remaining 25% in the survey could tip the scale on this point, yet they said they were still collecting and reanalyzing data.

My argument here is you can trust and still verify. In fact, you should verify where you want to maintain or enhance trust in leadership. Experts definitely should not be blandly labelled anti-government (the 3% who ignore) when they ask for more data or do reanalysis (the 90% who want to improve decision-making).

Perhaps Mitch Hedberg put it best:

I bought a doughnut and they gave me a receipt for the doughnut. I don’t need a receipt for a doughnut. I just give you the money, you give me the doughnut. End of transaction. We don’t need to bring ink and paper into this. I just can not imagine a scenario where I had to prove I bought a doughnut. Some skeptical friend. Don’t even act like I didn’t get that doughnut. I got the documentation right here. Oh, wait it’s back home in the file. Under D.

We have many doughnut scenarios with government. Decisions are easy: pick a doughnut, eat it. At least 10% of the time we may even eat a doughnut when our gut instinct says not to, because the impact seems manageable. The Sony cyberattack however is complicated, with potentially huge/unknown impact, where people SHOULD imagine a scenario requiring proof. It’s more likely in the 90% range, where an expert simply going along with it would be exhibiting poor leadership skills.

So the debate actually boils down to this: should the governed be able to call for accountability from their government without being accused of a complete lack of trust? Or, perhaps more broadly, should the governed have the means to immediately help improve the accuracy and accountability of their government, providing additional resources and skills to make their government more effective?

Agile and Social/Pair Development Methods

In the commercial world we have seen a massive shift in IT management from waterfall and staged progress (e.g. environments with rigorously separated development, test, ready, release, production) to developers frequently running operations. Security in operations has had to keep up and in some cases lead the evolution.

Given the context above, where embracing feedback loops leads to better outcomes, isn’t government facing the same evolutionary path? The answer seems obvious. Yes, of course government should be inviting criticism and be prepared to adapt and answer, moving development closer to operations. Criticisms could even be more manageable by nature of a process where they occur more frequently, in response to smaller updates.

Back to Lee’s post, however: he suggests an incremental or shared analysis would be a path to disaster.

The government knew when it released technical evidence surrounding the attack that what it was presenting was not enough. The evidence presented so far has been lackluster at best, and by its own admission, there was additional information used to arrive at the conclusion that North Korea was responsible, that it decided to withhold. Indeed, the NSA has now acknowledged helping the FBI with its investigation, though it is still unclear what exactly the nature of that help was.

But in presenting inconclusive evidence to the public to justify the attribution, the government opened the door to cross-analysis that would obviously not reach the same conclusion it had reached. It was likely done with good intention, but came off to the security community as incompetence, with a bit of pandering.

[…]

Being open with evidence does have serious consequences. But being entirely closed with evidence is a problem, too. The worst path is the middle ground though.

Lee shows us a choice based on a false pretense of two sides and a middle full of risk. Put this in the context of IT. Take responsibility for all the flaws and you delay code forever. Give away all responsibility for flaws and your customers go somewhere else. So you choose a reasonable release schedule that has removed major flaws while inviting feedback to iterate and improve before the next release. We see software continuously shifting towards the more agile model, away from internal secret waterfalls.

Lee gives his ultimate example of danger.

This opens up scary possibilities. If Iran had reacted the same way when its nuclear facility was hit with the Stuxnet malware we likely would have all critiqued it. The global community would have not accepted “we did analysis but it’s classified so now we’re going to employ countermeasures” as an answer. If the attribution was wrong and there was an actual countermeasure or response to the attack then the lack of public analysis could have led to incorrect and drastic consequences. But with the precedent now set—what happens next time? In a hypothetical scenario, China, Russia, or Iran would be justified to claim that an attack against their private industry was the work of a nation-state, say that the evidence is classified, and then employ legal countermeasures. This could be used inappropriately for political posturing and goals.

Frankly this sounds NOT scary to me. It sounds par for the course in international relations. The 1953 US decision to destroy Iran’s government at the behest of UK oil investors was the scary and ill-conceived reality, as I explained in my Stuxnet talk.

One thing I repeatedly see Americans fail to realize is that the world looks in at America playing a position of strength unlike others, jumping into “incorrect and drastic consequences”. Internationally, the one believed most likely to leap without support tends to be the one who perceives itself to have the most power, using an internal compass instead of true north.

What really is happening is that those in American government, especially in the intelligence and military communities, are trying to make sense of how to achieve a position of power for cyber conflict. Intelligence agencies seek to accumulate the most information, while those in the military contemplate definitions of winning. The two are not necessarily in alignment, since some definitions of winning can have a negative impact on the ability to gather information. And so a power struggle is unfolding, with test scenarios indispensable to those wanting to establish precedent and indicators.

This is why moving towards a more agile model, away from internal secret waterfalls, is a smart path. The government should be opening up to feedback, engaging the public and skeptics to find definitions in unfamiliar space. Collecting and analyzing data are becoming essential skills in IT because they are the future of navigating a world without easy Thatcher-ish “sides” defined. Lee concludes with the opposite view, which again presents binary options.

The government in the future needs to pick one path and stick to it. It either needs to realize that attribution in a case like this is important enough to risk disclosing sources and methods or it needs to realize that the sources and methods are more important and withhold attribution entirely or present it without any evidence. Trying to do both results in losses all around.

Or trying to do both could help drive a government out of the dark ages of decision-making tools. Remember the inability of a certain French general to listen to the skeptics all around him saying a German invasion through the forest was imminent? Remember how that same general refused to use radio for regular updates, sticking to a plan, unlike his adversaries on their way to overtake his territory with quickly shifting paths and dynamic plans?

Bureaucracy and inefficiency lead to strange overconfidence and comfort in “sides” rather than openness to unfamiliar, agile and adaptive thinking. We should not confuse the convenience of getting everyone pointed in the same direction with true preparation and the skills to avoid unnecessary losses.

The government should evolve away from tendencies to force complex scenarios into false binary choices, especially where social and pairing methods make analysis easy to improve. In the future, the best leaders will evaluate the most paths and use reliable methods to gradually reevaluate and adjust based on enhanced feedback. They will not “pick one path and stick to it”, because situational awareness is more powerful and can even be more consistent with values (maintaining the moral high ground by correcting errors rather than doubling down).

I’ve managed to avoid making any reference to football. Yet at the end of the day isn’t this all really about an American ideal of industrialization? Run a play. Evaluate. Run another play. Evaluate. America is entering a world of cyber more like soccer (the real football), far more fluid and dynamic. Baseball has the same problem. Even basketball has shades of industrialization with machine-like plays. The highly structured, top-down competitive system that America was built upon, and that it has used for conflict dominance, is facing a new game with new rules that requires more adaptability: intelligence unlocked from set paths.

Update 24 Jan: Added more original text of first quote for better context per comment by Robert Lee below.