Gov Fumbles Over-Inflated Sony Hack Attribution Ball

This (draft) post is essentially a response to one called “The Feds Got the Sony Hack Right, But the Way They’re Framing It Is Dangerous” by Robert Lee. Lee stated:

At its core, the debate comes down to this: Should we trust the government and its evidence or not? But I believe there is another view that has not been widely represented. Those who trust the government, but disagree with the precedent being set.

Lee is not the only person in government framing the debate around this core. It smacks of those in government forcing a choice of one side or the other, for or against them. Such a binary depiction of governance, such a call for obedience, is highly politically charged. Do not accept it.

I will offer two concepts to help with the issue of choosing a path.

  1. Trust but Verify (As Gorbachev Used to Tell Reagan)
  2. Agile and Social/Pair Development Methods

So here is a classic problem: non-existent threats get over-inflated because of secret forums and debates. Bogus reports and false pretenses may well be accidents, to be quickly corrected, or they may be used intentionally to justify policies and budgets, which calls for more concerted protest.

If you know this spectrum, are you actually helping improve trust in government overall by working with it to eliminate error or correct bias? How does trusting the government and its evidence, while also wanting to improve that government, fit into the sides Lee describes? It seems far more complicated than writing off skeptics as distrustful of government. It also has been shown that skeptics help preserve trust in government.

Take a moment to look back at a false attribution blow-up of 2011:

Mimlitz says last June, he and his family were on vacation in Russia when someone from Curran Gardner called his cell phone seeking advice on a matter and asked Mimlitz to remotely examine some data-history charts stored on the SCADA computer.

Mimlitz, who didn’t mention to Curran Gardner that he was on vacation in Russia, used his credentials to remotely log in to the system and check the data. He also logged in during a layover in Germany, using his mobile phone. …five months later, when a water pump failed, that Russian IP address became the lead character in a 21st-century version of a Red Scare movie.

Everything deflated once public attention forced the report to be investigated. Given the political finger-pointing that followed, it is doubtful that incident could have received appropriate attention in secret meetings. In fact, much of the reform of agencies and of how they handle investigations comes as a result of public criticism of their results.

Are external skepticism and interest/pressure the key to improving trust in government? Will we achieve more accurate analysis through more parallel and open computation? The “Big Data” community says yes. More broadly speaking, given how widely the Aktenzeichen XY … ungelöst “help police solve crimes” TV show has been emulated since it started in 1967, the general population probably would agree as well.

Trust but Verify

British Prime Minister Margaret Thatcher famously once quipped “Standing in the middle of the road is very dangerous; you get knocked down by the traffic from both sides.” Some might take this to mean it is smarter to go with the flow. As Lee highlighted, they say pick a side either for trust in government or against. Actually, it often turns out to be smarter to reject this analogy.

Imagine flying a plane. Which “side” do you fly on when you see other planes flying in no particular direction? Thatcher was renowned for false-choice risk management, a road with only two directions where everyone chooses a side without exception. She was adamantly opposed to Gorbachev tearing down the wall, for example, because it did not fit her over-simplified risk-management theory. Verification of safety is so primitive in her analogy as to be worthless for real-world management.

Asking for verification should be a celebration of government and trust. We trust our government so much, we do not fear to question its authority. Auditors, for example, look for errors or inconsistencies in companies without being seen as a threat to trust in those companies. Executives further strengthen trust through skepticism and inquiry.

Consider for a moment an APT (really, no pun intended) study called “Decisive action: How businesses make decisions and how they could do it better”. It asked: “when taking a decision, if the available data contradicted your gut feeling, what would you do?”

APT-doubt

Releasing incomplete data could reasonably be expected to generate 90% push-back for more data or more analysis, according to this study. Those listening to the FBI claim that North Korea is responsible probably have a gut feeling contradicting the data. That gut feeling is more “are we supposed to accept incomplete data as proof of something? been there, done that, let’s keep going” than it is “we do not trust you”.

In the same study 38% said decisions are better when more people are involved, while 38% said more people did not help, so quantity alone is not the route to better outcomes. Quality remains a factor, so there has to be a reasonable bar for input, as we have found in Big Data environments. The remaining 25% in the survey could tip the scale on this point, yet they said they were still collecting and reanalyzing data.

My argument here is that you can trust and still verify. In fact, you should verify where you want to maintain or enhance trust in leadership. Experts who ask for more data or do reanalysis (the 90% who want to improve decision-making) definitely should not be blandly labeled anti-government (as though they were the 3% who ignore data).

Perhaps Mitch Hedberg put it best:

I bought a doughnut and they gave me a receipt for the doughnut. I don’t need a receipt for a doughnut. I just give you the money, you give me the doughnut. End of transaction. We don’t need to bring ink and paper into this. I just can not imagine a scenario where I had to prove I bought a doughnut. Some skeptical friend. Don’t even act like I didn’t get that doughnut. I got the documentation right here. Oh, wait it’s back home in the file. Under D.

We have many doughnut scenarios with government. Decisions are easy: pick a doughnut, eat it. At least 10% of the time we may even eat a doughnut when our gut instinct says not to, because the impact seems manageable. The Sony cyberattack, however, is complicated, with potentially huge/unknown impact, and it is exactly the kind of scenario where people SHOULD imagine needing proof. It is more likely in the 90% range, where an expert simply going along with it would be exhibiting poor leadership skills.

So the debate actually boils down to this: should the governed be able to call for accountability from their government without being accused of a complete lack of trust? Or, perhaps more broadly, should the governed have the means to immediately help improve the accuracy and accountability of their government, providing additional resources and skills to make that government more effective?

Agile and Social/Pair Development Methods

In the commercial world we have seen a massive shift in IT management from waterfall and staged progress (e.g. environments with rigorously separated development, test, ready, release, production) to developers frequently running operations. Security in operations has had to keep up and in some cases lead the evolution.

Given the context above, where embracing feedback loops leads to better outcomes, isn’t government facing the same evolutionary path? The answer seems obvious. Yes, of course government should be inviting criticism and be prepared to adapt and answer, moving development closer to operations. Criticism could even become more manageable by the nature of a process in which it occurs more frequently, in response to smaller updates.

Back to Lee’s post, however: he suggests an incremental or shared analysis would be a path to disaster.

The government knew when it released technical evidence surrounding the attack that what it was presenting was not enough. The evidence presented so far has been lackluster at best, and by its own admission, there was additional information used to arrive at the conclusion that North Korea was responsible, that it decided to withhold. Indeed, the NSA has now acknowledged helping the FBI with its investigation, though it still unclear what exactly the nature of that help was.

But in presenting inconclusive evidence to the public to justify the attribution, the government opened the door to cross-analysis that would obviously not reach the same conclusion it had reached. It was likely done with good intention, but came off to the security community as incompetence, with a bit of pandering.

[…]

Being open with evidence does have serious consequences. But being entirely closed with evidence is a problem, too. The worst path is the middle ground though.

Lee shows us a choice based on the false pretense of two sides and a middle full of risk. Put this in the context of IT. Take responsibility for all flaws and you delay code forever. Give away all responsibility for flaws and your customers go somewhere else. So you choose a reasonable release schedule that has removed major flaws while inviting feedback to iterate and improve before the next release. We see software continuously shifting toward this more agile model, away from internal secret waterfalls.

Lee gives his ultimate example of danger.

This opens up scary possibilities. If Iran had reacted the same way when it’s nuclear facility was hit with the Stuxnet malware we likely would have all critiqued it. The global community would have not accepted “we did analysis but it’s classified so now we’re going to employ countermeasures” as an answer. If the attribution was wrong and there was an actual countermeasure or response to the attack then the lack of public analysis could have led to incorrect and drastic consequences. But with the precedent now set—what happens next time? In a hypothetical scenario, China, Russia, or Iran would be justified to claim that an attack against their private industry was the work of a nation-state, say that the evidence is classified, and then employ legal countermeasures. This could be used inappropriately for political posturing and goals.

Frankly this sounds NOT scary to me. It sounds par for the course in international relations. The 1953 US decision to destroy Iran’s government at the behest of UK oil investors was the scary and ill-conceived reality, as I explained in my Stuxnet talk.

One thing I repeatedly see Americans fail to realize is that the world looks in at America playing from a position of strength unlike any other, jumping into “incorrect and drastic consequences”. Internationally, the actor believed most likely to leap without support tends to be the one that perceives itself as having the most power, using an internal compass instead of true north.

What really is happening is those in American government, especially those in the intelligence and military communities, are trying to make sense of how to achieve a position of power for cyber conflict. Intelligence agencies seek to accumulate the most information, while those in the military contemplate definitions of winning. The two are not necessarily in alignment since some definitions of winning can have a negative impact on the ability to gather information. And so a power struggle is unfolding with test scenarios indispensable to those wanting to establish precedent and indicators.

This is why moving toward a more agile model, away from internal secret waterfalls, is a smart path. The government should be opening up to feedback, engaging the public and skeptics to find definitions in unfamiliar space. Collecting and analyzing data are becoming essential skills in IT because they are the future of navigating a world without easy Thatcher-ish “sides”. Lee concludes with the opposite view, which again presents binary options.

The government in the future needs to pick one path and stick to it. It either needs to realize that attribution in a case like this is important enough to risk disclosing sources and methods or it needs to realize that the sources and methods are more important and withhold attribution entirely or present it without any evidence. Trying to do both results in losses all around.

Or trying to do both could help drive a government out of the dark ages of decision-making. Remember the inability of a certain French general to listen to the skeptics all around him saying a German invasion through the forest was imminent? Remember how that same general refused to use radio for regular updates, sticking to a plan, unlike his adversaries overtaking his territory with quickly shifting paths and dynamic plans?

Bureaucracy and inefficiency lead to strange overconfidence and comfort in “sides” rather than an opening up to unfamiliar agile and adaptive thinking. We should not confuse the convenience of getting everyone pointed in the same direction with true preparation and the skills to avoid unnecessary losses.

The government should evolve away from its tendency to force complex scenarios into false binary choices, especially where social and pairing methods make analysis easy to improve. In the future, the best leaders will evaluate the most paths and use reliable methods to gradually reevaluate and adjust based on enhanced feedback. They will not “pick one path and stick to it”, because situational awareness is more powerful and can even be more consistent with values (maintaining the moral high ground by correcting errors rather than doubling down).

I’ve managed to avoid making any reference to football. Yet at the end of the day isn’t this all really about an American ideal of industrialization? Run a play. Evaluate. Run another play. Evaluate. America is entering a world of cyber more like soccer (the real football): far more fluid and dynamic. Baseball has the same problem. Even basketball has shades of industrialization with its machine-like plays. A highly structured, top-down competitive system that America was built upon, and that it has used for conflict dominance, is facing a new game with new rules that requires more adaptability; intelligence unlocked from set paths.

Update 24 Jan: Added more original text of first quote for better context per comment by Robert Lee below.

Russian Cables to North Korea: Lessons From the Sony Cyberattack

Background and History

On November 24, 2014, employees at Sony Pictures Entertainment arrived at work to find their computer screens displaying a red skeleton and threats from a group calling itself “Guardians of Peace.” Within hours, the company’s network was gutted. Unreleased films, executive emails, salary data, and social security numbers for 47,000 current and former employees were exfiltrated and systematically dumped online. The attack coincided with Sony’s planned release of “The Interview,” a comedy depicting the assassination of Kim Jong-un.

By December, the FBI had attributed the attack to North Korea. This attribution was met with immediate skepticism from portions of the security community, not because defending the DPRK seemed appealing, but because the technical evidence presented publicly was thin and the geopolitical convenience was obvious. The debate quickly polarized into camps: those accepting the government’s word and those demanding proof.

Disaster Recovery

What makes the Sony breach remarkable isn’t the exfiltration, since that’s so common, but the angle of destruction. The attackers deployed wiper malware that rendered systems unbootable, forcing Sony to revert to fax machines and pencil-and-paper checks for weeks. This went beyond espionage into punishment as proof. The operational tempo suggested planning and resources far beyond disgruntled insiders, the theory floated by some skeptics. Yet however accomplished the destruction, we were left with little to say about who held the match.

Material Impact

Sony’s losses extended beyond the $35 million in immediate IT remediation. The company pulled “The Interview” from theatrical release after theater chains received threats, then reversed course under public pressure. Executives resigned. Lawsuits mounted. The strategic value of the attack, demonstrating that a major American corporation could be brought to its knees and made to self-censor, far exceeded whatever intelligence value the stolen emails provided. Someone invested significant resources in a demonstration of power.

Sophisticated Attack

Here’s where the attribution debate gets most interesting. Critics of the FBI’s conclusion often argue that North Korea is too isolated and therefore lacks technical capability for such an operation. The DPRK is portrayed as a hermit kingdom where citizens have no internet access and technology stopped advancing in 1953.

This framing is wrong, and lazy.

First, North Korea fought a war to stalemate by effectively disappearing. Its leaders know that appearing incompetent, or not appearing at all, while forcing capabilities underground, is a tactical advantage. Second, it confuses the known poverty of North Korean citizens with the unknown capabilities of the state’s military and intelligence services. States that fail to feed their populations can still build cyber weapons, let alone nukes; states with limited civilian internet can still run offensive operations. The question isn’t whether every North Korean has broadband access. The question is whether its intelligence services have the infrastructure and connectivity, anywhere and at any time, to project power through networks.

US Military Industrial Congressional Complex

The rush to attribute and the subsequent calls for retaliation fit a familiar pattern. Cyber Pearl Harbor rhetoric has been building for years, and the defense establishment always seems to need demonstrable threats to justify budgets. Motivated reasoning cuts both directions, and skepticism of government claims can be just as reflexive as acceptance. We should examine the actual evidence rather than accepting appeals to classified sources. When Director Clapper tells us the evidence is compelling but we can’t see it, we’re being asked to trust institutions with documented records of deception on matters of war and peace.

Cold/Proxy Wars

The Sony hack exists within a longer history. The Korean War never formally ended. The DPRK has been under American sanctions for decades. Both nations have reasons to view the other as an adversary, and both have conducted operations against each other. North Korean defectors report that cyber operations are a priority investment precisely because they offer asymmetric advantages against a conventionally superior adversary. None of this proves attribution in a specific case, but it establishes that North Korea has motive, has stated intent, and has been building capability. The question becomes: what capability, exactly?

Attribution: DPRK Use of Technology

Apparently there are over 2 million 3G users in Pyongyang, as Google’s Eric Schmidt mentioned on Google+:

North Korean limits right now.

There is a 3G network that is a joint venture with an Egyptian company called Orascom. It is a 2100 Megahertz SMS-based technology network that does not, for example, allow users to have a data connection and use smart phones. It would be very easy for them to turn the Internet on for this 3G network. Estimates are that there are about a million and a half phones in the DPRK with some growth planned in the near future.

There is a supervised Internet and a Korean Intranet. (It appeared supervised in that people were not able to use the internet without someone else watching them). There’s a private intranet that is linked with their universities. Again, it would be easy to connect these networks to the global Internet.

Schmidt’s observations from his 2013 visit are useful but incomplete. He describes what ordinary North Koreans can access, not what the state’s offensive capabilities look like. A country can maintain a locked-down domestic internet while running sophisticated external operations—in fact, that’s precisely the configuration you’d expect from a surveillance state that also wants to project cyber power.

North Korean links to the Internet

In May 2006 TransTeleCom Company and North Korea’s Ministry of Communications signed an agreement for the construction and joint operation of a fiber-optic transmission line in the section of the Khasan–Tumangang railway checkpoint. This connects North Korea through a fiber optic cable with Vladivostok, crossing the Russia-North Korea border at Tumangang.

I also read a while ago that the Egyptian company Orascom had set up North Korea’s Koryolink. Then I noticed Russians were taking a massive interest in Orascom. This looked a little over-hyped, yet business is business and there was a good chance a real network was being developed for general public use. Orascom was kind enough to release marketing material that provided a (simulated) map using the Huawei OptiX iManager.

koryolink

Everyone knows that mobile phones are the future of Internet use, especially in emerging markets. Although I have read about extensive sneakernet access (smuggled storage devices plugged into laptops in remote cabins), that obviously doesn’t scale. Instead the Orascom network is supposedly leading to a boom in cellphone adoption.

cellphone-adoption-nk

My thought after reading about the Orascom connection is that they’re probably going to link up to Russian telecom. Russians moving in on Orascom suggests they would continue investing more broadly, connecting to back-haul and other trade routes. A quick check of flights, to my surprise, showed there were indeed trips regularly going north into Russia. Although I expected to see flights to China, I instead found a very good indication of the Russian investment direction from a few years ago.

nk-ru-flight

Flights definitely reveal important and current trade links. But we still need to be on the ground to establish knowledge about topology and routes for cables.

A quick and easy answer was to look at telecom companies bragging about their upgrades. For example, here’s a 2011 Press Release called “LEADING RUSSIAN SERVICE PROVIDER FUTURE-PROOFS HIGH-PERFORMANCE INFRASTRUCTURE WITH JUNIPER NETWORKS“.

TTK (TransTeleCom), a major communication provider in Russia that operates one of the country’s largest fiber networks, has chosen Juniper Networks MX Series 3D Universal Edge Routers to provide the Ethernet bandwidth required to ensure high-speed connectivity while delivering innovative applications and services such as IP-VPN, L2-VPN, video conference across Russia and IP-transit to its subscribers across 75,000 km of cables running along railway lines and 1,000 access nodes

Sounds like Juniper could be running the backbone we’re going to be looking into for North Korean traffic, no? Notice below a red line all the way to the right that takes a straight north-south run towards North Korea. If you squint you can even see a giant grey arrow indicating service going directly to…yup, North Korea.

transtelecommap

This is just a PR map, however. Let’s back it up a little. That 75,000 km of fiber connecting 1,000 nodes in trans-Asia follows the established trade paths cut by rail, of course. Rail tends to do a marvelous job of predicting where data will flow (it is much less expensive to reuse existing paths, and access is obvious). One might even argue trains are more appropriate than planes to show trends in the DPRK, given how often its leaders have said they prefer trains.
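In principle the backbone question is checkable from a desk: a looking-glass traceroute toward DPRK address space would show whether the hops traverse TTK’s network before crossing the border. A minimal sketch of that check, parsing sample output; the hostnames and addresses here are invented for illustration, and real TTK routers use their own naming scheme:

```python
# Sketch: scanning looking-glass traceroute output for a known carrier's
# backbone. Hostnames and IPs below are invented for illustration only.

SAMPLE_TRACEROUTE = """\
 1  gw.example.net (192.0.2.1)  1.2 ms
 2  msk01.transtelecom.net (198.51.100.7)  38.4 ms
 3  vlad02.transtelecom.net (198.51.100.9)  121.0 ms
 4  border.star-co.net.kp (203.0.113.5)  180.3 ms
"""

def carriers_in_path(trace: str, markers: dict) -> list:
    """Return which known carrier markers appear in the path, in hop order."""
    seen = []
    for line in trace.splitlines():
        for marker, name in markers.items():
            if marker in line and name not in seen:
                seen.append(name)
    return seen

# Substrings that would identify a hop's operator in reverse DNS.
MARKERS = {"transtelecom": "TTK (Russia)", ".kp": "DPRK"}

print(carriers_in_path(SAMPLE_TRACEROUTE, MARKERS))
```

If a real trace showed TTK hops immediately before a `.kp` hop, that would corroborate the rail-corridor fiber route discussed here; the sketch only shows the shape of the check, not a result.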

Other than using obvious and easy PR, or following rail, a fun next step might have been to poke around shipping and undersea cables. There seemed no reason to believe a cable would go undersea (except maybe further north to Japan, or following the oil pipeline project that Japan funded in Nakhodka). After following pipelines and undersea routes to Russian borders (basically none of interest) I put that idea on the back burner and looked more closely again at the railroads to border areas.

My eye ended up catching on the railroad crossing just southeast of a finger of Chinese territory, about 20 km upstream of entry into the Sea of Japan. At Tumangan sits a railroad bridge connecting North Korea to Russia across a river. On several different maps I found Russia extends over the river to the south bank, perhaps cementing against any claim of China extending further to the Sea. Nokia’s map makes it most clear.

nk-khasanskiy-nokiamap

The borders here seem to have been settled by the Beijing Treaty of 1860. Perusing eye-level photographs supported this explanation: Russian and Chinese border posts sit literally right next to each other, without any divider.

Russia pillar - earth plate (Qing Li)
Russia pillar by hanjiang.dudiao

Speaking of observations at eye level, the river has very low banks that probably shift (just the north four spans, of eight total, are over water, potentially explaining a border sitting so far south). Satellite images show crops in the fields ruined by flooding. It also is clear the bridge is built for trains, while no wires are visible. The bridge seems to be quite a serious construction for such a rural area. It’s not magnificent, yet it suggests an ability to handle heavy industrial loads.

Khasansky District, Primorsky Krai, Russia
Khasansky District, Primorsky Krai, Russia by EdwMac

All very interesting points about this area, but let’s get back to train routes. I dig a little for routes running southbound towards North Korea from Russia. Ask me sometime about how I almost accidentally ended up in Ukraine while traveling on a midnight train through Hungary. Anyway, there are two trains a day from Khabarovsk, Khabarovsk Krai (four a day northbound), crossing 800 km in about 12 hours.

nk-southboundtrain

Fortuitously I also find a ticket from a passenger traveling from Russia to Pyongyang via Tumangan, crossing from a Russian station in a border town called Khasan.


Train ticket “Pyongyang via Tumangan” by Helmut

Here’s the train just before it crosses from Khasan into Tumangan. Very nice picture.

Train in Khasan, Primorsky Krai, Russia just across the border from North Korea
Train in Khasan, Primorsky Krai, Russia just across the border from North Korea by mwbild

I poke around the Khasan train ride details, looking for cables and lines headed southbound. The Russian end of the bridge doesn’t look promising. There is a lot going on in this photo but not enough to say cables are running through.

nk-rivercrossing

So I keep poking and take a look from the other end. The North Korean end of the bridge tells a different story.

nk-rivercrossing-southend

Bingo. See the left side? Cables on a pole extending to supports on the bridge, running into Russia. I would love to go on, yet we have reached what we set out to accomplish: clear evidence North Korea has infrastructure along trade routes connecting directly to Russia.

A Bridge to Somewhere

So where does this leave us on attribution? I have not yet proven North Korea hacked Sony. What I’ve demonstrated is that the “North Korea couldn’t possibly have the infrastructure” argument doesn’t survive contact with publicly available evidence.

The DPRK has fiber optic connectivity to Russia through TransTeleCom, running along established rail corridors, crossing at the same Khasan–Tumangang bridge that has been moving trains and trade for decades. The 2006 agreement between TTK and North Korea’s Ministry of Communications predates the Sony hack by eight years. This isn’t speculative future capability; it’s installed infrastructure.

The hermit kingdom narrative serves multiple interests. It lets skeptics dismiss attribution without examining evidence. It lets the US government claim unique insight that only classified sources can provide. And it lets everyone avoid the harder question: if North Korea does have offensive cyber capability enabled by Russian and Chinese infrastructure, what does that mean for how we think about state-sponsored attacks, sanctions regimes, and the geography of the internet?

Cables follow rail lines because that’s where the rights-of-way are. They cross borders where bridges already exist. The internet is real copper and glass running through physical territory controlled by states with their own interests. North Korea’s connectivity runs through Russia and China because that’s who shares borders and has reasons to maintain the connection. Understanding that topology matters more than arguing about whether we have receipts for Kim Jong-un personally approving anything like a hack.
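That topology also leaves fingerprints in public routing data. The DPRK announces a single small public IPv4 block, and any BGP looking glass shows which upstream carries it at a given moment. A hedged sketch of reading origin and upstream out of an AS_PATH string; the AS numbers are real public registrations, included for illustration rather than as evidence from this post:

```python
# Sketch: identifying a prefix's upstream transit provider from a BGP
# AS_PATH, as reported by a public looking glass or RouteViews collector.
# AS numbers are real public registrations, used here for illustration.

AS_NAMES = {
    4837: "China Unicom",
    20485: "TransTeleCom (TTK, Russia)",
    131279: "Star JV (DPRK, 175.45.176.0/22)",
}

def origin_and_upstream(as_path: str):
    """AS_PATH lists the collector-side AS first; the origin is the last hop."""
    hops = [int(asn) for asn in as_path.split()]
    origin = hops[-1]
    upstream = hops[-2] if len(hops) > 1 else None
    return origin, upstream

# An example path shape: the DPRK prefix reached via Chinese transit.
origin, upstream = origin_and_upstream("2914 4837 131279")
print(AS_NAMES[origin], "reached via", AS_NAMES[upstream])
# If the Khasan-Tumangang fiber carried transit, paths through AS20485
# (TTK) would show up in the same data -- the route this post predicts.
```

The point of the sketch is that attribution debates about capability can be grounded in observable routing rather than press releases.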

I don’t know yet who hit Sony. But I know who could have, and I know the infrastructure to do it runs across a bridge I can show you on a map.

Was Stuxnet the “First”?

My 2011 presentation on Stuxnet was meant to highlight a few basic concepts. Here are two:

  • Sophisticated attacks are ones we are unable to explain clearly. Spoons are sophisticated to babies. Spoons are not sophisticated to long-time chopstick users. It is a relative measure, not an absolute one. As we increase our ability to explain and use things they become less sophisticated to us. Saying something is sophisticated really is to communicate that we do not understand it, although that may be our own fault.
  • Original attacks are ones we have not seen before. It also is a relative measure, not an absolute one. As we spend more time researching and observing things, fewer things will be seen as original. In fact with just a little bit of digging it becomes hard to find something completely original rather than evolutionary or incremental. Saying something is original therefore is to say we have not seen anything like it before, although that may be our own fault.

Relativity is the key here. Ask yourself if there is someone to easily discuss attacks with to make them less sophisticated and less original. Is there a way to be less in awe and more understanding? It’s easy to say “oooh, spoon” and it should not be that much harder to ask “anyone seen this thing before?”

Here’s a simple thought exercise:

Given that we know critical infrastructure is extremely poorly defended, and given that we know control systems are by design simple, would an attack designed for simple systems behind simple security therefore be sophisticated? My argument is usually no: by design, the technical aspects of compromise tend to be a low bar, perhaps especially in Iran.

Since the late 1990s I have been doing assessments inside utilities and I have not yet found one hard to compromise. However, there still is a sophisticated part, where research and skills definitely are required. Knowing exactly how to make an ongoing attack invisible, and tailoring the attack to a very specific intended result, is a level above getting in and grabbing data or even causing harm.

An even more advanced attack makes the traces and tracks of the attack itself invisible. So there definitely are ways to raise the sophistication and uniqueness levels substantially, from “oooh, spoon” to “I have no idea if that was me that just did that”. I believe this has become known as the Mossad-level attack, at which point defense is not about technology.

I thought with my 2011 presentation I could show how a little analysis makes major portions of Stuxnet less sophisticated and less original; certainly it was not the first of its kind and it is arguable how targeted it was as it spread.

The most sophisticated aspects to me were that it moved through many actors across boundaries (e.g. Germany, Iran, Pakistan, Israel, US, Russia), requiring knowledge of areas not easily accessed or learned. Ok, let’s face it: that thinking turned out to be on the right path, although one important role was backwards and I wasn’t sure where it would lead.

A US ex-intel expert mentioned on Twitter during my talk that I had “conveniently” ignored motives. This is easy for me to explain: I focus on consequences, as motive is basically impossible to know. Still, as a clue that comment was helpful. I wasn’t thinking hard enough about the economic-espionage aspect that US intelligence agencies have revealed as a motivator. Recent revelations suggest the US was angry at Germany for allowing technology into Iran. I had mistakenly thought Germany would have been working with the US, or that Israel would have been able to pressure Germany. Nope.

Alas, a simple flip of Germany’s role (critical to good analysis and unfortunately overlooked by me) makes far more sense, because Germany (less often than, but similar to, France) stands accused of illicit sales of dangerous technology to enemies of the US and its friends. It also fits with accusations I have heard from a US ex-intel expert that someone (i.e. Atomstroyexport) tipped off the Germans, an “unheard of” first responder to research and report Stuxnet. The news cycles actually exposed Germany’s ties to Iran and potentially changed how the public would link similar or follow-up action.

But this post isn’t about the interesting social science aspects driving a geopolitical technology fight (between Germany/Russia and Israel/US over Iran’s nuclear program), it’s about my failure to make enough of an impression to add perspective. So I will try again here. I want to address an odd tendency of people to continue reporting Stuxnet as the first-ever breach of its type. This is what the BSI said in its February 2011 Cyber Security Strategy for Germany (page 3):

Experience with the Stuxnet virus shows that important industrial infrastructures are no longer exempted from targeted IT attacks.

No longer exempted? Targeted attacks go back a long way, as anyone familiar with the NIST report on the 2000 Maroochy breach should be aware.

NIST has established an Industrial Control System (ICS) Security Project to improve the security of public and private sector ICS. NIST SP 800-53 revision 2, December 2007, Recommended Security Controls for Federal Information Systems, provides implementing guidance and detail in the context of two mandatory Federal Information Processing Standards (FIPS) that apply to all federal information and information systems, including ICSs.

Note an important caveat in the NIST report:

…”Lessons Learned From the Maroochy Water Breach” refer to a non-public analytic report by the civil engineer in charge of the water supply and sewage systems…during time of the breach…

These non-public analytic reports are where most breach discussions take place. Nonetheless, there never was any exemption and there are public examples of ICS compromise and damage. NIST gives Maroochy from 2000. Here are a few more ICS attacks to consider and research:

  • 1992 Portland/Oroville – Widespread SCADA Compromise, Including BLM Systems Managing Dams for Northern California
  • 1992 Chevron – Refinery Emergency Alert System Disabled
  • 1992 Ignalina, Lithuania – Engineer installs virus on nuclear power plant ICS
  • 1994 Salt River – Water Canal Controls Compromised
  • 1999 Gazprom – Gas Flow Switchboard Compromised
  • 2000 Maroochy Shire – Water Quality Compromised
  • 2001 California – Power Distribution Center Compromised
  • 2003 Davis-Besse – Nuclear Safety Parameter Display Systems Offline
  • 2003 Amundsen-Scott – South Pole Station Life Support System Compromised
  • 2003 CSX Corporation – Train Signaling Shutdown
  • 2006 Browns Ferry – Nuclear Reactor Recirculation Pump Failure
  • 2007 Idaho Nuclear Technology & Engineering Complex (INTEC) – Turbine Failure
  • 2008 Hatch – Contractor software update to business system shuts down nuclear power plant ICS
  • 2009 Carrell Clinic – Hospital HVAC Compromised
  • 2013 Austria/Germany – Power Grid Control Network Shutdown

Fast forward to December 2014 and a new breach case inside Germany comes out via the latest BSI report. It involves ICS so the usual industry characters start discussing it.

Immediately I tweet for people to take in the long-view, the grounded-view, on German BSI reports.

Alas, my 2011 presentation with a history of breaches and my recent tweets clearly failed to sway, so I am here blogging again. As evidence of my failure I offer the following headlines, which really emphasize a “second time ever” event.

That list of four in the last article is interesting. It sets the article apart from the other two headlines, yet it also claims “and only the second confirmed digital attack”? That’s clearly a false statement.

Anyway, Wired appears to have crafted their story in a strangely similar fashion to another site; perhaps too similar to a Dragos Security blog post a month earlier (same day as the BSI tweets above).

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack. The first instance, the malware Stuxnet, caused damage to nearly 3,000 centrifuges in the Natanz facility in Iran. Stories of damage in other facilities have appeared over the years but mostly based on tightly held rumors in the Industrial Control Systems (ICS) community that have not been made public. Additionally there have been reports of companies operating in ICS being attacked, such as the Shamoon malware which destroyed upwards of 30,000 computers, but these intrusions did not make it into the control system environment or damage actual control systems. The only other two widely reported stories on physical damage were the Trans-Siberian-Pipeline in explosion in 1982 and the BTC Turkey pipeline explosion in 2008. It is worth noting that both stories have come under intense scrutiny and rely on single sources of information without technical analysis or reliable sources. Additionally, both stories have appeared during times where the reporting could have political motive instead of factuality which highlights a growing concern of accurate reporting on ICS attacks. The steelworks attack though is reported from the German government’s BSI who has both been capable and reliable in their reporting of events previously and have the access to technical data and first hand sources to validate the story.

Now here is someone who knows what they are talking about. Note the nuance and detail in the Dragos text. So I realize my problem is with a Dragos post regurgitated a month later by Wired without attribution: look at how all the qualifiers disappeared in translation. Wired looks preposterous compared to this more thorough reporting.

The Dragos opening line is a great study in how to set up a series of qualifications before stepping through them with explanations:

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack

The phrase has more qualifications than Lance Armstrong:

  • Has to be a reliable source. Not sure who qualifies that.
  • Has to be publicly confirmed. Does this mean a government agency, or the actual victim admitting breach?
  • Has to be physical damage to control systems. Why the control systems themselves, and not anything controlled by those systems? Presumably because the author writes an ICS security blog.
  • Has to result from cyber-attack. They did not say malware, so this is very broad.

Ok, Armstrong had more than four… Still, the Wired phrase by comparison uses dangerously loose adaptations and drops half the qualifiers. Wired wrote “This is only the second confirmed case in which a wholly digital attack caused physical destruction of equipment” and that’s it. Two qualifications instead of four.

So we easily can say Maroochy was a wholly digital attack that caused physical destruction of equipment. We reach the Wired bar without a problem. We’d be done already, with Stuxnet proven not to be the first.

Dragos is harder. Maroochy also was from a reliable source, publicly confirmed, resulting from a packet-radio attack (arguably cyber). The only thing left to qualify is physical damage to control systems. I think the Dragos bar is set oddly high in requiring that the control systems themselves be damaged. Granted, ICS management will consider ICS damage differently than external harms; this is true in most industries, although you would expect it to be the opposite in ICS. To the vast majority, news of 800,000 released liters of sewage obviously qualifies as physical damage. So Maroochy would still qualify. Perhaps more to the point, the BSI report says the furnace was set to an unknown state, which caused breakdown. Maroochy had its controls manipulated to an unknown state, albeit without damaging the controls themselves.
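The qualifier comparison above can be sketched as a toy check. This is my own illustration, not anything from the Dragos or Wired pieces, and the incident attributes encode my rough reading of the public record rather than authoritative data:

```python
# The Dragos piece uses four qualifiers; Wired's paraphrase keeps only two.
DRAGOS = ("reliable_source", "publicly_confirmed",
          "damage_to_control_systems", "cyber_attack")
WIRED = ("wholly_digital", "physical_destruction")

# Rough characterization of the two incidents (my reading, not official).
incidents = {
    "Stuxnet 2010":  {"reliable_source": True, "publicly_confirmed": True,
                      "damage_to_control_systems": True, "cyber_attack": True,
                      "wholly_digital": True, "physical_destruction": True},
    "Maroochy 2000": {"reliable_source": True, "publicly_confirmed": True,
                      # controls were manipulated, not themselves damaged
                      "damage_to_control_systems": False, "cyber_attack": True,
                      "wholly_digital": True, "physical_destruction": True},
}

def qualifies(incident, criteria):
    """True only if an incident meets every qualifier in a criteria set."""
    return all(incident[c] for c in criteria)

for name, attrs in incidents.items():
    print(name, "| Dragos:", qualifies(attrs, DRAGOS),
          "| Wired:", qualifies(attrs, WIRED))
```

By the looser Wired criteria Maroochy passes, so Stuxnet cannot be “first”; only the narrow damage-to-control-systems qualifier keeps Maroochy out of the Dragos set.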

If anyone is going to hang their hat on damage to control systems, then perhaps they should refer to it as an Aurora litmus, given the infamous DHS study of substations in 2007 (840pg PDF).


The concern with Aurora, if I understood the test correctly, was not just to manipulate the controls. It was to “exploit the capability of modern protective equipment and cause them to serve as a destructive weapon”. In other words, use the controls that were meant to prevent damage to cause widespread damage instead. Damage to just the controls themselves, without wider effect, would be a premature end to a cyber-physical attack, albeit a warning.

I’d love to dig into that BTC Turkey pipeline explosion in 2008, since I worked on that case at the time. I agree with the Dragos blog it doesn’t qualify, however, so I have to move on. Before I do, there is an important lesson from 2008.

Suffice it to say I was on press calls and gave clear, documented evidence to those who interviewed me about cyber attacks on critical infrastructure. For example, the official Georgian complaint listed no damage related to cyber attack. The press instead ran a story, without doing any research, using hearsay that Russia knocked the Georgian infrastructure off-line with a cyber attack. That often can be a problem with the press, and perhaps that is why I am calling Wired out here for their lazy title.

Let’s look at another example, the 2007 TCAA, from a reliable source, publicly confirmed, causing damage to control systems, caused by cyber-attack:

Michael Keehn, 61, former electrical supervisor with Tehama Colusa Canal Authority (TCAA) in Willows, California, faces 10 years in prison on charges that he “intentionally caused damage without authorization to a protected computer,” according to Keehn’s November 15 indictment. He did this by installing unauthorized software on the TCAA’s Supervisory Control and Data Acquisition (SCADA) system, the indictment states.

Perfect example. Meets all four criteria. Sounds bad, right? Aha! Got you.

Unfortunately this incident turns out to be based only on an indictment turned into a news story, repeated by others without independent research. Several reporters jumped on the indictment, created a story, and then moved on. Dan Goodin probably had the best perspective, at least introducing skepticism about the indictment. I put the example here not only to trick the reader, but also to highlight how seriously I take the question of a “reliable source”.

Journalists often unintentionally muddy waters (pun not intended) and mislead; they can move on as soon as the story goes cold. What stake do they really have when spinning their headline? How much accountability do they hold? Meanwhile, those of us defending infrastructure (should) keep digging for truth in these matters, because we need it for more than a talking point: we need it to improve our defenses.

I’ve read the available court documents, and they indicate a misunderstanding about software-developer copyright, which led to a legal fight, all of which has been dismissed. In fact the accused afterwards wrote a book, “Anatomy of a Criminal Indictment”, about how to successfully defend yourself in court.

In 1989 he applied for a job with the Tehama-Colusa Canal Authority, a Joint Powers Authority who operated and maintained two United States Bureau of Reclamation canals. During his tenure there, he volunteered to undertake development of full automated control of the Tehama-Colusa Canal, a 110-mile canal capable of moving 2,000 cfs (cubic feet of water per second). It was out of this development for which he volunteered to undertake, that resulted in a criminal indictment under Title 18, Part I, Chapter 47, Section 1030 (Fraud and related activity in connection with computers). He would be under indictment for three years before the charges were dismissed. During these three years he was very proactive in his own defense and learned a lot that an individual not previously exposed would know about. The defense attorney was functioning as a public defender in this case, and yet, after three years the charges were dismissed under a motion of the prosecution.

One would think reporters would jump at the chance to highlight the dismissal, or promote the book. Sadly, the only news I find is about the original indictment. And so we still find the indictment listed by information security references as an example of an ICS attack, even though it was not. Again, props to the Dragos blog for being skeptical about prior events. I still say that, aside from Maroochy, we can prove Stuxnet was not the first public case.

The danger in taking the wide view is that it increases the need to understand far more details and do deeper research to avoid being misled. The benefit, as I pointed out at the start, is that we significantly raise the bar for what is considered a sophisticated or original attack.

In my experience Stuxnet is a logical evolution, an application of accumulated methods within a context already well documented and repeatedly warned about. I believe putting it back in that context makes it more accessible to defenders. We need better definitions of physical damage and cyber, let alone reputable sources, before throwing around firsts and seconds.

Yes, malware that deviates from normal can be caught, even unfamiliar malware, if we observe and respond quickly to abnormal behavior. Calling Stuxnet the “first” will perhaps garner more attention, which is good for eyeballs on headlines. However, it also delays people from realizing how it fits a progression: is the adversary introducing never-seen-before tools and methods, or are they just extremely well practiced with what we know?

The latest studies suggest how easy, almost trivial, it would be for security analysts monitoring traffic as well as operations to detect Stuxnet. Regardless of the 0day, the more elements of behavior monitored, the more the attacker has to scale. Companies like ThetaRay have been founded on this exact premise: to automate and reduce the cost of the measures a security analyst would use to protect operations. (It is already a crowded market.)
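To illustrate that monitoring premise, here is a minimal sketch, with invented numbers and no claim to represent any vendor’s actual method, of the kind of statistical baseline an analyst could use to flag operations readings that deviate from normal behavior:

```python
from statistics import mean, stdev

def flag_anomalies(baseline, readings, z_threshold=3.0):
    """Return readings deviating from the baseline mean by more than
    z_threshold standard deviations of normal behavior."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [r for r in readings if abs(r - mu) > z_threshold * sigma]

# Invented telemetry: steady readings, then one manipulated value.
baseline = [1000, 1002, 998, 1001, 999, 1003, 997, 1000]
readings = [1001, 999, 1410, 1002]  # 1410 is far outside normal behavior
print(flag_anomalies(baseline, readings))  # → [1410]
```

The point is not the threshold arithmetic but the scaling argument above: each additional monitored behavior is another baseline the attacker must stay inside to remain invisible.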

That’s the way I presented it in 2011, and little has changed since then. Perhaps the most striking attempt to make Stuxnet stand out that I have heard lately came from ex-USAF staff; paraphrasing him: Stuxnet was meant to be to Iran what the atom bomb was to Japan, a weapon of mass destruction to change the course of war and be apologized for later.

It would be interesting if I could find myself able to agree with that argument. I do not. But if I did agree, then perhaps I could point to recent research, based on Japanese and Russian first-person reports, showing the USAF was wrong about Japan. Fear of nuclear assault, let alone mass casualties and destruction from the bombs, did not end the war with Japan; rather, the leadership gave up hope two days after the Soviets entered the Pacific Theater. And that really should make you wonder about people who say we should be thankful for the consequences of either malware or bombs.

But that is obviously a blog post for another day.

Please find below some references for further reading, all of which put Stuxnet in broad context rather than treating it as the “first”:

N. Carr, Development of a Tailored Methodology and Forensic Toolkit for Industrial Control Systems Incident Response, US Naval Postgraduate School 2014

A. Nicholson; S. Webber; S. Dyer; T. Patel; H. Janicke, SCADA security in the light of Cyber-Warfare 2012

C. Wueest, Targeted Attacks Against the Energy Sector, Symantec 2014

B. Miller; D. Rowe, A Survey of SCADA and Critical Infrastructure Incidents, SIGITE/RIIT 2012

C. Baylon; R. Brunt; D. Livingstone, Cyber Security at Civil Nuclear Facilities, Chatham House 2015

Movie Review: JSA (Joint Security Area)

A South Korean soldier slowly hands a shiny mechanical lighter to a North Korean soldier, as if to give thanks through transfer of better technology. The North Korean lights a cigarette and contemplates the object. The South Korean clarifies its value as “you can see yourself in the reflection; see how clean your teeth are”. This movie is full of clever and humorous juxtapositions, similar to questioning values of the urban cosmopolitan versus rural bumpkin.

The area known as the JSA (Joint Security Area) is a small section of the Demilitarized Zone (DMZ) between North and South Korea. The two countries have their militaries stationed literally face-to-face, just a few feet from each other. Buildings in the area have served as meeting space, brokered by international oversight, and there is palpable tension in the air.

This movie draws the viewer into this feeling and the lives of soldiers suspended by two countries within an old armistice and trying to find ways around it; men and women trapped inside an internationally monitored agreement to postpone hostilities.

Primary roles are played by just four soldiers, two North and two South. Also stepping up to the dance are the investigators and observers, positioned in an awkward third role between the two sides.

The NNSC (Neutral Nations Supervisory Commission) and the US have a dominant secondary tier of influence to the dialogue. I found no mention of other global players, such as China or Russia. Perhaps the absence of these countries is explained by the fact this movie was released in 2000. Today it might be a different story.

Directed by Park Chan-wook, the movie’s cultural perspective and references are clearly South Korean.

North Korea is portrayed in a surprising light as the more thoughtful and grounded of the two countries. The South is shown to be obsessed with shallow perfections, looking at itself and boasting about false success, while roles played by the North are either weary and wise or kind and naive. It is the US and UN that come out being the real villains in the script, perpetuating a civil war that would heal if only allowed by outside meddlers.

What comes across to me is a third-generation war movie; a Tarantino-style M*A*S*H.

Col. Sherman T. Potter and Klinger in the famous TV series about the futility of war, as seen through the lens of a Mobile Army Surgical Hospital (MASH) during the Korean War.

There is a strong pacifist-irony thread, clearly influenced by Tarantino’s style of borrowing and remixing old scenes from popular war/gangster movies with today’s direct approach. No subtlety will be found. The viewer is granted full-gore, slow-motion, blood-splattering scenes of useless death, the sort of lens Tarantino developed as he grew up working in a Los Angeles video-rental store. John Wayne, for example, is played by the North Korean sergeant…

Chan-wook is quoted saying his movies highlight “the utter futility of vengeance and how it wreaks havoc on the lives of everyone involved”.

Despite the gore and sometimes strained irony, the film is suspenseful and on-target with much of its commentary. It offers a counter-intuitive story that veers uncomfortably close to glorifying the North and vilifying the US, delivering over-simplifications of civil war.

This is exactly the sort of popular cartoonist perspective many of us need to take into consideration, because it forces a rethink of how and where a “dark side” is being portrayed.

If Marvel were to dream up a superhero of South Korean origin it might have more shades of this plot than anything a US director would ever allow.

I give it four out of a classified number of penguins.