

Beware the Sony Errorists

A BBC business story on the Sony breach flew across my screen today. Normally I would read through and be on my way. This story, however, was so riddled with strange and simple errors that I had to stop and wonder: who really reads this without pause? Perhaps we need a Snopes for theories on hackers.

A few examples

Government-backed attackers have far greater resources at their disposal than criminal hacker gangs…

False. Criminal hacker gangs can amass far greater resources more quickly than government-backed ones. Consider how criminal gangs operate relative to the restrictions of the “governed”. Government-backed groups have constraints on budget, accountability, jurisdiction and more. I am reminded of the Secret Service agent who told me how he had to scrape and toil for months to bring together an international team with resources and approval. After finally getting approval, his group descended in a helicopter onto the helipad of a criminal property that was literally a massive gilded castle surrounded by exotic animals and vehicles. Government agencies were outclassed on almost every level, yet careful planning, working together and correct timing were on their side. The bust was successful despite strained resources across several countries.

Of course it is easy to find opposite examples. The government invests in the best equipment to prepare for some events and clearly we see “defense” budgets swell. This is not the point. In many scenarios of emerging technology you find innovation and resources are handled better by criminal gangs who lack constraints of being governed — criminals can be as lavish or unreasonable as they decide. Have you noticed anyone lately talking about how Apple or Google have more money than Russia?

Government-backed hackers simply won’t give up…

False. This should be self-evident from the answer above. Limited resources and even regime change are some of the obvious reasons why government-backed anything will give up. In the context of North Korea, let alone wider history of conflict, we only have to look at a definition of the current armistice that is in place: “formal agreement of warring parties to stop fighting”.

Two government-backed sides in Korea formally “gave up” and signed an armistice agreement on July 27, 1953, at 10 a.m.

Perhaps some will not like this example because North Korea is notorious for nullifying the armistice as a negotiation tactic. Its constant reminders of intent for reunification make it seem as though it has refused to give up. I’d disagree, on the principle of what armistice means. Even so, let’s consider instead the U.S. role in Vietnam. On January 27, 1973 an “Ending the War and Restoring Peace in Viet-Nam” Agreement was signed by the U.S. and others in conflict; by the end of 1973 the U.S. had unquestionably given up attacks and three years later North and South were united.

I also am tempted to point to famous pirates (Ching Shih or Peter Easton) who “gave up” after a career of being sponsored by various states to attack others. They simply inked a deal with one sponsor to retire.

“What you need is a bulkhead approach like in a ship: if the hull gets breached you can close the bulkhead and limit the damage…”

True with Serious Warning. To put it simply, bulkheads are a tool, not a complete solution. This is the exact mistake that led to the Titanic disaster. A series of bulkheads (with some fancy new technology hand-waving of the time) were meant to keep the ship safe even when breached. This led people to refer to the design as “unsinkable”. So if the Titanic sank how can bulkheads still be a thing to praise?

I covered this in my keynote presentation for the 2010 RSA Conference in London. Actually, I wasn’t expecting to be a keynote speaker and had packed my talk with details. Then I found myself on the main stage, speaking right after Richard Clarke, which made it awkward to fit in my usual pace of delivery. Anyway, here’s a key slide of the keynote.

[Slide from the 2010 RSA Conference London keynote: Titanic bulkhead lessons]

The bulkheads gave a false sense of confidence, allowing a greater disaster to unfold for a combination of reasons. See how “wireless issues” and “warnings ignored” and “continuing to sail” and “open on top” start to add up? In other words, if you hit something and detect a leak you tend to make an earlier and more complete assessment, one that affects the whole ship. If you instead think “we’ve got bulkheads, keep going” then a leak that could be repaired or slowed turns very abruptly into a terminal event, a sinking.

Clearly Sony had been breached in one of their bulkheads already. We saw the PlayStation breach in 2011 have dramatic and devastating impact. Sony kept sailing, probably with warnings ignored elsewhere, communications issues, and thinking improvements in one bulkhead area of the company were sufficient. Another breach devastated them in 2013 and they continued along…so perhaps you can see how bulkheads are a tool that offers great promise yet requires particular management to be effective. Bulkheads all by themselves are not a “need”. Like a knife, or any other tool that makes defense easier, what people “need” is to learn how to use them properly: keep the pointy side in the right direction.

Another way of looking at the problem

The rest of the article runs through a mix of several theories.

One theory mentioned is to delete data to avoid breaches. This is good specific advice, not good general advice. If we were talking about running out of storage room, people may look at deletion as a justified option. If the data is not necessary to keep and carries a clear risk (e.g. post-authorization payment card data fines) then there is a case to be made. And in the case of regulation, the data to be deleted is well-defined. Otherwise deleting poorly-defined data actually can make things worse through rebellion.

A company tells its staff that the servers will be purging data and you know what happens next? Staff start squirreling away data on every removable storage device and cloud provider they can find, because they still see that data as valuable, necessary to be successful, and there’s no real penalty for them. Moreover, telling everyone to delete email that may incriminate is awkward strategy advice (e.g. someone keeps a copy and you delete yours, leaving you without anything to dispute their copy with). Also it may be impossible to ask this of environments where data is treated as a formal and permanent record. People in isolation could delete too much or the wrong stuff, discovered too late by upper management. Does that risk outweigh the unknown potential of breach? Pushing a risk decision away from the captain of a ship and into bulkheads without good communication can lead to Titanic miscalculations.
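
To make the “well-defined” distinction concrete, here is a minimal sketch. The categories and retention periods are hypothetical, my own illustration rather than anything from the BBC piece: deletion is scripted only for data a regulation actually defines, while everything else is routed to an explicit, accountable decision rather than a blanket purge.

```python
# Minimal sketch with hypothetical categories and limits: automate deletion only
# where the data class and its retention rule are well-defined (e.g. regulated
# payment card data), and route everything else to a human decision instead of
# a blanket purge.
from datetime import date, timedelta

RETENTION_DAYS = {
    "post_auth_card_data": 0,   # should not be retained after authorization at all
    "access_logs": 365,
    "email": None,              # poorly-defined risk: no scripted purge
}

def disposition(category: str, created: date, today: date) -> str:
    limit = RETENTION_DAYS.get(category)
    if limit is None:
        return "refer to records owner"   # a decision, not an automatic deletion
    if today - created > timedelta(days=limit):
        return "delete"
    return "keep"

today = date(2014, 12, 31)
print(disposition("post_auth_card_data", date(2014, 12, 1), today))  # delete
print(disposition("email", date(2011, 6, 1), today))                 # refer to records owner
```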

Another theory offered is to encrypt and manage keys perfectly. Setting aside perfect anything management, encryption is seriously challenged by an imposter event like Sony. A person inside an environment can grab keys. Once they have the keys they have to be stopped by requiring other factors of identification. Asking the imposter to provide something they have or something they are is where the discussion often will go: stronger authentication controls both to prevent attacks spreading and also to help alert management to a breach in progress. Achieving this tends to require better granularity in data (fewer bulkheads) and also more of it (fewer deletions). The BBC correctly pointed out that there is a balance, yet by this point the article is such a mess they could say anything in conclusion.
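
Since the argument above hinges on stopping an imposter who already holds keys, here is a minimal sketch of what “requiring other factors” can look like in practice. All names and values are hypothetical; the point is simply that a key is released only after a second factor checks out, and that every failed attempt is logged so a breach in progress can surface as an alert.

```python
# Minimal sketch (hypothetical names throughout): a data-encryption key is released
# only after a second authentication factor verifies, and every attempt is logged
# so a run of failures can alert management to a breach in progress.
import hmac
import logging
import secrets
from typing import Optional

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("keyservice")

class KeyService:
    def __init__(self) -> None:
        # In a real deployment keys would sit in an HSM or KMS, not in process memory.
        self._keys = {"payroll-db": secrets.token_bytes(32)}
        self._second_factor = {"alice": "492817"}  # stand-in for a TOTP/U2F verification

    def release_key(self, user: str, key_id: str, otp: str) -> Optional[bytes]:
        expected = self._second_factor.get(user)
        # Constant-time comparison; holding a stolen credential alone is not enough.
        if expected is None or not hmac.compare_digest(expected, otp):
            log.warning("second-factor failure: user=%s key=%s", user, key_id)
            return None
        log.info("key released: user=%s key=%s", user, key_id)
        return self._keys.get(key_id)

svc = KeyService()
assert svc.release_key("alice", "payroll-db", "000000") is None      # imposter stopped
assert svc.release_key("alice", "payroll-db", "492817") is not None  # legitimate request
```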

What I am saying here is think carefully about threats and solutions if you want to manage them. Do not settle on glib statements that get repeated without much thought or explanation, let alone evidence. Containment can work against you if you do not manage it well, adding cost and making a small breach into a terminal event. A boat obviously will use any number of technologies, new and old, to keep things dry and productive inside. A lot of what is needed relates to common sense about looking and listening for feedback. This is not to say you need some super guru as captain; rather it is the opposite. You need people committed to improvement, to reporting when things are not as they should be, in order to achieve a well-run ship.

Those are just some quick examples of how I would position things differently. Nation-states are not always in a better position. Often they are hindered. Attackers have weaknesses and commitments. Finding a way to make them stop is not impossible. And ultimately, throwing around analogies is GREAT as long as they are not incomplete or applied incorrectly. Hope that helps clarify how to use a little common sense to avoid the errors being made in journalists’ stories on the Sony breach.

Posted in Security.


Gov Fumbles Over-Inflated Sony Hack Attribution Ball

This (draft) post basically comes after reading one called “The Feds Got the Sony Hack Right, But the Way They’re Framing It Is Dangerous” by Robert Lee. Lee stated:

At its core, the debate comes down to this: Should we trust the government and its evidence or not? But I believe there is another view that has not been widely represented. Those who trust the government, but disagree with the precedent being set.

Lee is not the only person in government referring to this core for the debate. It smacks of being forced by those in government to choose one side or the other, for or against them. Such a binary depiction of governance, such a call for obedience, is highly politically charged. Do not accept it.

I will offer two concepts to help with the issue of choosing a path.

  1. Trust but Verify (As Gorbachev Used to Tell Reagan)
  2. Agile and Social/Pair Development Methods

So here is a classic problem: non-existent threats get over-inflated because of secret forums and debates. Bogus reports and false pretenses could very well be accidents, to be quickly corrected, or they may be intentionally used to justify policies and budgets, requiring more concerted protest.

If you know the spectrum, are you actually helping improve trust in government overall by working with them to eliminate error or correct bias? How does trusting government and its evidence while also wanting to improve government fit into the sides Lee quotes? It seems far more complicated than writing off skeptics as distrustful of government. It also has been proven that skeptics help preserve trust in government.

Take a moment to look back at a false attribution blow-up of 2011:

Mimlitz says last June, he and his family were on vacation in Russia when someone from Curran Gardner called his cell phone seeking advice on a matter and asked Mimlitz to remotely examine some data-history charts stored on the SCADA computer.

Mimlitz, who didn’t mention to Curran Gardner that he was on vacation in Russia, used his credentials to remotely log in to the system and check the data. He also logged in during a layover in Germany, using his mobile phone. …five months later, when a water pump failed, that Russian IP address became the lead character in a 21st-century version of a Red Scare movie.

Everything deflated after the report was investigated due to public attention. Given the political finger-pointing that came out afterwards it is doubtful that incident could have received appropriate attention in secret meetings. In fact, much of the reform of agencies and how they handle investigations comes as a result of public criticism of results.

Are external skepticism and interest/pressure the key to improving trust in government? Will we achieve more accurate analysis through more parallel and open computations? The “Big Data” community says yes. More broadly speaking, given how many countries have emulated the Aktenzeichen XY … ungelöst “help police solve crimes” TV show since it started in 1967, the general population probably would agree as well.

Trust but Verify

British Prime Minister Margaret Thatcher famously once quipped “Standing in the middle of the road is very dangerous; you get knocked down by the traffic from both sides.” Some might take this to mean it is smarter to go with the flow. As Lee highlighted, they say pick a side either for trust in government or against. Actually, it often turns out to be smarter to reject this analogy.

Imagine flying a plane. Which “side” do you fly on when you see other planes flying in no particular direction? Thatcher was renowned for false choice risk-management, a road with only two directions where everyone chooses sides without exceptions. She was adamantly opposed to Gorbachev tearing down the wall, for example, because it did not fit her over-simplified risk management theory. Verification of safety is so primitive in her analogy as to be worthless to real-world management.

Asking for verification should be a celebration of government and trust. We trust our government so much, we do not fear to question its authority. Auditors, for example, look for errors or inconsistencies in companies without being seen as a threat to trust in those companies. Executives further strengthen trust through skepticism and inquiry.

Consider for a moment an APT (really, no pun intended) study called “Decisive action: How businesses make decisions and how they could do it better“. It asked “when taking a decision, if the available data contradicted your gut feeling, what would you do?”

[Chart from the study: what respondents do when data contradicts their gut feeling]

Releasing incomplete data could be reasonably expected to have 90% push back for more data or more analysis, according to this study. Those listening to the FBI claim North Korea is responsible probably have a gut feeling contradicting the data. That gut feeling is more “are we supposed to accept incomplete data as proof of something, because been there done that, let’s keep going” than it is “we do not trust you”.

In the same study 38% said decisions are better when more people are involved, and 38% said more people did not help, so quantity alone isn’t the route to better outcomes. Quality remains a factor, so there has to be a reasonable bar to input, as we have found in Big Data environments. The remaining 25% in the survey could tip the scale on this point, yet they said they were still collecting and reanalyzing data.

My argument here is you can trust and you still can verify. In fact, you should verify where you want to maintain or enhance trust in leadership. Experts definitely should not be blandly labelled as anti-government (the 3% who ignore) when they ask for more data or do reanalysis (the 90% who want to improve decision-making).

Perhaps Mitch Hedberg put it best:

I bought a doughnut and they gave me a receipt for the doughnut. I don’t need a receipt for a doughnut. I just give you the money, you give me the doughnut. End of transaction. We don’t need to bring ink and paper into this. I just can not imagine a scenario where I had to prove I bought a doughnut. Some skeptical friend. Don’t even act like I didn’t get that doughnut. I got the documentation right here. Oh, wait it’s back home in the file. Under D.

We have many doughnut scenarios with government. Decisions are easy. Pick a doughnut, eat it. At least 10% of the time we may even eat a doughnut when our gut instinct says do not, because impact seems manageable. The Sony cyberattack however is complicated, with potentially huge/unknown impact, and where people SHOULD imagine a scenario requiring proof. It’s more likely in the 90% range, where an expert simply going along with it would be exhibiting poor leadership skills.

So debate actually boils down to this: should the governed be able to call for accountability from their government without being accused of complete lack of trust? Or perhaps more broadly should the governed have the means to immediately help improve accuracy and accountability of their government, provide additional resources and skills to make their government more effective?

Agile and Social/Pair Development Methods

In the commercial world we have seen a massive shift in IT management from waterfall and staged progress (e.g. environments with rigorously separated development, test, ready, release, production) to developers frequently running operations. Security in operations has had to keep up and in some cases lead the evolution.

Given the context above, where embracing feedback-loops leads to better outcomes, isn’t government also facing the same evolutionary path? The answer seems obvious. Yes, of course government should be inviting criticism and be prepared to adapt and answer, moving development closer to operations. Criticisms could even be more manageable by nature of a process where they occur more frequently in response to smaller updates.

Back to Lee’s post, however: he suggests an incremental or shared analysis would be a path to disaster.

The government knew when it released technical evidence surrounding the attack that what it was presenting was not enough. The evidence presented so far has been lackluster at best, and by its own admission, there was additional information used to arrive at the conclusion that North Korea was responsible, that it decided to withhold. Indeed, the NSA has now acknowledged helping the FBI with its investigation, though it still unclear what exactly the nature of that help was.

But in presenting inconclusive evidence to the public to justify the attribution, the government opened the door to cross-analysis that would obviously not reach the same conclusion it had reached. It was likely done with good intention, but came off to the security community as incompetence, with a bit of pandering.

[…]

Being open with evidence does have serious consequences. But being entirely closed with evidence is a problem, too. The worst path is the middle ground though.

Lee shows us a choice based on false pretense of two sides and a middle full of risk. Put this in context of IT. Take responsibility for all the flaws and you delay code forever. Give away all responsibility for flaws and your customers go somewhere else. So you choose a reasonable release schedule that has removed major flaws while inviting feedback to iterate and improve before next release. We see software continuously shifting towards the more agile model, away from internal secret waterfalls.

Lee gives his ultimate example of danger.

This opens up scary possibilities. If Iran had reacted the same way when it’s nuclear facility was hit with the Stuxnet malware we likely would have all critiqued it. The global community would have not accepted “we did analysis but it’s classified so now we’re going to employ countermeasures” as an answer. If the attribution was wrong and there was an actual countermeasure or response to the attack then the lack of public analysis could have led to incorrect and drastic consequences. But with the precedent now set—what happens next time? In a hypothetical scenario, China, Russia, or Iran would be justified to claim that an attack against their private industry was the work of a nation-state, say that the evidence is classified, and then employ legal countermeasures. This could be used inappropriately for political posturing and goals.

Frankly this sounds NOT scary to me. It sounds par for the course in international relations. The 1953 US decision to destroy Iran’s government at the behest of UK oil investors was the scary and ill-conceived reality, as I explained in my Stuxnet talk.

One thing I repeatedly see Americans fail to realize is that the world looks in at America playing a position of strength unlike others, jumping into “incorrect and drastic consequences”. Internationally the one believed most likely to leap without support tends to be the one who perceives they have the most power, using an internal compass instead of true north.

What really is happening is those in American government, especially those in the intelligence and military communities, are trying to make sense of how to achieve a position of power for cyber conflict. Intelligence agencies seek to accumulate the most information, while those in the military contemplate definitions of winning. The two are not necessarily in alignment since some definitions of winning can have a negative impact on the ability to gather information. And so a power struggle is unfolding with test scenarios indispensable to those wanting to establish precedent and indicators.

This is why moving towards a more agile model, away from internal secret waterfalls, is a smart path. The government should be opening up to feedback, engaging the public and skeptics to find definitions in unfamiliar space. Collecting and analyzing data are becoming essential skills in IT because they are the future of navigating a world without easy Thatcher-ish “sides” defined. Lee concludes with the opposite view, which again presents binary options.

The government in the future needs to pick one path and stick to it. It either needs to realize that attribution in a case like this is important enough to risk disclosing sources and methods or it needs to realize that the sources and methods are more important and withhold attribution entirely or present it without any evidence. Trying to do both results in losses all around.

Or trying to do both could help drive a government out of the dark ages of decision-making tools. Remember the inability of a certain French General to listen to the skeptics all around him saying German invasion through the forest was imminent? Remember how that same General refused to use radio for regular updates, sticking to a plan, unlike his adversaries on their way to overtake his territory with quickly shifting paths and dynamic plans?

Bureaucracy and inefficiency lead to strange overconfidence and comfort in “sides” rather than opening up to unfamiliar agile and adaptive thinking. We should not confuse the convenience of getting everyone pointed in the same direction with true preparation and the skills to avoid unnecessary losses.

The government should evolve away from tendencies to force complex scenarios into false binary choices, especially where social and pairing methods make analysis easier to improve. In the future, the best leaders will evaluate the most paths and use reliable methods to gradually reevaluate and adjust based on enhanced feedback. They will not “pick one path and stick to it” because situational awareness is more powerful and can even be more consistent with values (maintaining the moral high-ground by correcting errors rather than doubling-down).

I’ve managed to avoid making any reference to football. Yet at the end of the day isn’t this all really about an American ideal of industrialization? Run a play. Evaluate. Run another play. Evaluate. America is entering a world of cyber more like soccer (the real football) that is far more fluid and dynamic. Baseball has the same problem. Even basketball has shades of industrialization with machine-like plays. A highly-structured top-down competitive system that America was built upon and that it has used for conflict dominance is facing a new game with new rules that requires more adaptability; intelligence unlocked from set paths.

Update 24 Jan: Added more original text of first quote for better context per comment by Robert Lee below.

Posted in History, Security.


Was Stuxnet the “First”?

My 2011 presentation on Stuxnet was meant to highlight a few basic concepts. Here are two:

  • Sophisticated attacks are ones we are unable to explain clearly. Spoons are sophisticated to babies. Spoons are not sophisticated to long-time chopstick users. It is a relative measure, not an absolute one. As we increase our ability to explain and use things they become less sophisticated to us. Saying something is sophisticated really is to communicate that we do not understand it, although that may be our own fault.
  • Original attacks are ones we have not seen before. It also is a relative measure, not an absolute one. As we spend more time researching and observing things, fewer things will be seen as original. In fact with just a little bit of digging it becomes hard to find something completely original rather than evolutionary or incremental. Saying something is original therefore is to say we have not seen anything like it before, although that may be our own fault.

Relativity is the key here. Ask yourself if there is someone to easily discuss attacks with to make them less sophisticated and less original. Is there a way to be less in awe and more understanding? It’s easy to say “oooh, spoon” and it should not be that much harder to ask “anyone seen this thing before?”

Here’s a simple thought exercise:

Given that we know critical infrastructure is extremely poorly defended, and given that we know control systems are by design simple, would an attack designed for simple systems behind simple security therefore be sophisticated? My argument is usually no, that by design the technical aspects of compromise tend to be a low bar…perhaps especially in Iran.

Since the late 1990s I have been doing assessments inside utilities and I have not yet found one hard to compromise. However, there still is a sophisticated part, where research and skills definitely are required. Knowing exactly how to make an ongoing attack invisible and getting the attack specific to a very intended result, that is a level above getting in and grabbing data or even causing harm.

An even more advanced attack makes the traces/tracks of the attack invisible. So there definitely are ways to bring the sophistication and uniqueness level up substantially from “oooh, spoon” to “I have no idea if that was me that just did that”. I believe this has become known as the Mossad-level attack, at which point defense is not about technology.

I thought with my 2011 presentation I could show how a little analysis makes major portions of Stuxnet less sophisticated and less original; certainly it was not the first of its kind and it is arguable how targeted it was as it spread.

The most sophisticated aspects to me were that it moved through many actors across boundaries (e.g. Germany, Iran, Pakistan, Israel, US, Russia), requiring knowledge of areas not easily accessed or learned. Ok, let’s face it. It turns out that thinking was on the right path, albeit an important role was backwards and I wasn’t sure where it would lead.

A US ex-intel expert mentioned on Twitter during my talk I had “conveniently” ignored motives. This is easy for me to explain: I focus on consequences as motive is basically impossible to know. However, as a clue that comment was helpful. I wasn’t thinking hard enough about the economic-espionage aspect that US intelligence agencies have revealed as a motivator. Recent revelations suggest the US was angry at Germany allowing technology into Iran. I had mistakenly thought Germany would have been working with the US, or Israel would have been able to pressure Germany. Nope.

Alas, a simple flip of Germany’s role (critical to good analysis and unfortunately overlooked by me) makes far more sense, because they (less often but similar to France) stand accused of illicit sales of dangerous technology to enemies of the US (and of US friends). It also fits with accusations I have heard from a US ex-intel expert that someone (i.e. Atomstroyexport) tipped off the Germans, an “unheard of” first responder to research and report Stuxnet. The news cycles actually exposed Germany’s ties to Iran and potentially changed how the public would link similar or follow-up action.

But this post isn’t about the interesting social science aspects driving a geopolitical technology fight (between Germany/Russia and Israel/US over Iran’s nuclear program), it’s about my failure to make an impression enough to add perspective. So I will try again here. I want to address an odd tendency of people to continue to report Stuxnet as the first ever breach of its type. This is what the BSI said in their February 2011 Cyber Security Strategy for Germany (page 3):

Experience with the Stuxnet virus shows that important industrial infrastructures are no longer exempted from targeted IT attacks.

No longer exempted? Targeted attacks go back a long way as anyone familiar with the NIST report on the 2000 Maroochy breach should be aware.

NIST has established an Industrial Control System (ICS) Security Project to improve the security of public and private sector ICS. NIST SP 800-53 revision 2, December 2007, Recommended Security Controls for Federal Information Systems, provides implementing guidance and detail in the context of two mandatory Federal Information Processing Standards (FIPS) that apply to all federal information and information systems, including ICSs.

Note an important caveat in the NIST report:

…”Lessons Learned From the Maroochy Water Breach” refer to a non-public analytic report by the civil engineer in charge of the water supply and sewage systems…during time of the breach…

These non-public analytic reports are where most breach discussions take place. Nonetheless, there never was any exemption and there are public examples of ICS compromise and damage. NIST gives Maroochy from 2000. Here are a few more ICS attacks to consider and research:

  • 1992 Portland/Oroville – Widespread SCADA Compromise, Including BLM Systems Managing Dams for Northern California
  • 1992 Chevron – Refinery Emergency Alert System Disabled
  • 1994 Salt River – Water Canal Controls Compromised
  • 1999 Gazprom – Gas Flow Switchboard Compromised
  • 2001 California – Power Distribution Center Compromised
  • 2003 Davis-Besse – Nuclear Safety Parameter Display Systems Offline
  • 2003 Amundsen-Scott – South Pole Station Life Support System Compromised
  • 2003 CSX Corporation – Train Signaling Shutdown
  • 2006 Browns Ferry – Nuclear Reactor Recirculation Pump Failure
  • 2007 Idaho Nuclear Technology & Engineering Complex (INTEC) – Turbine Failure
  • 2009 Carrell Clinic – Hospital HVAC Compromised
  • 2013 Austria/Germany – Power Grid Control Network Shutdown

Fast forward to December 2014 and a new breach case inside Germany comes out via the latest BSI report. It involves ICS so the usual industry characters start discussing it.

Immediately I tweet for people to take in the long-view, the grounded-view, on German BSI reports.

Alas, my presentation in 2011 with a history of breaches and my recent tweets clearly failed to sway, so I am here blogging again. I offer as example of my failure the following headlines that really emphasize a “second time ever” event.

That list of four in the last article is interesting. Sets it apart from the other two headlines, yet it also claims “and only the second confirmed digital attack”? That’s clearly a false statement.

Anyway Wired appears to have crafted their story in a strangely similar fashion to another site; perhaps too similar to a Dragos Security blog post a month earlier (same day as the BSI tweets above).

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack. The first instance, the malware Stuxnet, caused damage to nearly 3,000 centrifuges in the Natanz facility in Iran. Stories of damage in other facilities have appeared over the years but mostly based on tightly held rumors in the Industrial Control Systems (ICS) community that have not been made public. Additionally there have been reports of companies operating in ICS being attacked, such as the Shamoon malware which destroyed upwards of 30,000 computers, but these intrusions did not make it into the control system environment or damage actual control systems. The only other two widely reported stories on physical damage were the Trans-Siberian-Pipeline in explosion in 1982 and the BTC Turkey pipeline explosion in 2008. It is worth noting that both stories have come under intense scrutiny and rely on single sources of information without technical analysis or reliable sources. Additionally, both stories have appeared during times where the reporting could have political motive instead of factuality which highlights a growing concern of accurate reporting on ICS attacks. The steelworks attack though is reported from the German government’s BSI who has both been capable and reliable in their reporting of events previously and have the access to technical data and first hand sources to validate the story.

Now here is someone who knows what they are talking about. Note the nuance and details in the Dragos text. So I realize my problem is with a Dragos post regurgitated a month later by Wired without attribution; look at how all the qualifiers disappeared in translation. Wired looks preposterous compared to this more thorough reporting.

The Dragos opening line is a great study in how to setup a series of qualifications before stepping through them with explanations:

This is only the second time a reliable source has publicly confirmed physical damage to control systems as the result of a cyber-attack

The phrase has more qualifications than Lance Armstrong:

  • Has to be a reliable source. Not sure who qualifies that.
  • Has to be publicly confirmed. Does this mean a government agency or the actual victim admitting breach?
  • Has to be physical damage to control systems. Why control systems themselves, not anything controlled by systems? Because ICS security blog writer.
  • Has to result from cyber-attack. They did not say malware so this is very broad.

Ok, Armstrong had more than four… Still, the Wired phrase by comparison uses dangerously loose adaptations and drops half. Wired wrote “This is only the second confirmed case in which a wholly digital attack caused physical destruction of equipment” and that’s it. Two qualifications instead of four.

So we easily can say Maroochy was a wholly digital attack that caused physical destruction of equipment. We reach the Wired bar without a problem. We’d be done already and Stuxnet proved to not be the first.

Dragos is harder. Maroochy also was from a reliable source, publicly confirmed resulting from packet-radio attack (arguably cyber). Only thing left here is physical damage to control systems to qualify. I think the Dragos bar is set oddly high to say the control systems themselves have to be damaged. Granted, ICS management will consider ICS damage differently than external harms; this is true in most industries, although you would expect it to be the opposite in ICS. To the vast majority, news of 800,000 released liters of sewage obviously qualifies as physical damage. So Maroochy would still qualify. Perhaps more to the point, the BSI report says the furnace was set to an unknown state, which caused breakdown. Maroochy had its controls manipulated to an unknown state, albeit not damaging the controls themselves.
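
To make the comparison explicit, here is a minimal sketch in my own framing (not Wired’s or Dragos’) that treats each bar as a set of required attributes and checks the Maroochy facts against both.

```python
# Minimal sketch, my framing rather than Wired's or Dragos': each "first/second ever"
# bar is a set of required attributes, and an incident clears a bar only if it has
# every attribute in the set.
maroochy_2000 = {
    "cyber_attack",          # packet-radio commands to SCADA (arguably cyber)
    "physical_damage",       # roughly 800,000 liters of sewage released
    "reliable_source",       # documented in the analysis NIST cites
    "publicly_confirmed",
    # note what is missing: "damage_to_control_systems" -- the controls were
    # manipulated to an unknown state, not themselves destroyed
}

wired_bar = {"cyber_attack", "physical_damage"}
dragos_bar = {"reliable_source", "publicly_confirmed",
              "cyber_attack", "damage_to_control_systems"}

def clears(incident: set, bar: set) -> bool:
    return bar <= incident   # subset test: all required attributes present

print("Clears the Wired bar (two qualifiers): ", clears(maroochy_2000, wired_bar))   # True
print("Clears the Dragos bar (four qualifiers):", clears(maroochy_2000, dragos_bar)) # False
```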

If anyone is going to hang their hat on damage to control systems, then perhaps they should refer to it as an Aurora litmus, given the infamous DHS study of substations in 2007 (840pg PDF).


The concern with Aurora, if I understood the test correctly, was not to just manipulate the controls. It was to “exploit the capability of modern protective equipment and cause them to serve as a destructive weapon”. In other words, use the controls that were meant to prevent damage to cause widespread damage instead. Damage to just controls themselves without wider effect would be a premature end to a cyber-physical attack, albeit a warning.

I’d love to dig into that BTC Turkey pipeline explosion in 2008, since I worked on that case at the time. I agree with the Dragos blog it doesn’t qualify, however, so I have to move on. Before I do, there is an important lesson from 2008.

Suffice it to say I was on press calls and I gave clear and documented evidence to those interviewed about cyber attack on critical infrastructure. For example, the Georgia official complaint listed no damage related to cyber attack. The press instead ran a story, without doing any research, using hearsay that Russia knocked the Georgian infrastructure off-line with cyber attack. That often can be a problem with the press and perhaps that is why I am calling Wired out here for their lazy title.

Let’s look at another example, the 2007 TCAA, from a reliable source, publicly confirmed, causing damage to control systems, caused by cyber-attack:

Michael Keehn, 61, former electrical supervisor with Tehama Colusa Canal Authority (TCAA) in Willows, California, faces 10 years in prison on charges that he “intentionally caused damage without authorization to a protected computer,” according to Keehn’s November 15 indictment. He did this by installing unauthorized software on the TCAA’s Supervisory Control and Data Acquisition (SCADA) system, the indictment states.

Perfect example. Meets all four criteria. Sounds bad, right? Aha! Got you.

Unfortunately this incident turns out to be based on only an indictment turned into a news story, repeated by others without independent research. Several reporters jumped on the indictment, created a story, and then moved on. Dan Goodin probably had the best perspective, at least introducing skepticism about the indictment. I put the example here not only to trick the reader, but also to highlight how seriously I take the question of “reliable source”.

Journalists often unintentionally muddy waters (pun not intended) and mislead; they can move on as soon as the story goes cold. What stake do they really have when spinning their headline? How much accountability do they hold? Meanwhile, those of us defending infrastructure (should) keep digging for truth in these matters, because we need it for more than a talking point; we need it to improve our defenses.

I’ve read the court documents available and they indicate a misunderstanding about software developer copyright, which led to a legal fight, all of which has been dismissed. In fact the accused wrote a book afterwards called “Anatomy of a Criminal Indictment” about how to successfully defend yourself in court.

In 1989 he applied for a job with the Tehama-Colusa Canal Authority, a Joint Powers Authority who operated and maintained two United States Bureau of Reclamation canals. During his tenure there, he volunteered to undertake development of full automated control of the Tehama-Colusa Canal, a 110-mile canal capable of moving 2,000 cfs (cubic feet of water per second). It was out of this development for which he volunteered to undertake, that resulted in a criminal indictment under Title 18, Part I, Chapter 47, Section 1030 (Fraud and related activity in connection with computers). He would be under indictment for three years before the charges were dismissed. During these three years he was very proactive in his own defense and learned a lot that an individual not previously exposed would know about. The defense attorney was functioning as a public defender in this case, and yet, after three years the charges were dismissed under a motion of the prosecution.

One would think reporters would jump on the chance to highlight the dismissal, or promote the book. Sadly the only news I find is about the original indictment. And so we still find the indictment listed by information security references as an example of ICS attack, even though it was not. Again, props to the Dragos blog for being skeptical about prior events. I still say, aside from Maroochy, we can prove Stuxnet was not the first public case.

The danger in taking the wide-view is that it increases the need to understand far more details and do more deep research to avoid being misled. The benefit, as I pointed out at the start, is we significantly raise the bar for what is considered sophisticated or original attacks.

In my experience Stuxnet is a logical evolution, an application of accumulated methods within a context already well documented and warned about repeatedly. I believe putting it back in that context makes it more accessible to defenders. We need better definitions of physical damage and cyber, let alone reputable sources, before throwing around firsts and seconds.

Yes malware that deviates from normal can be caught, even unfamiliar malware, if we observe and respond quickly to abnormal behavior. Calling Stuxnet the “first” will perhaps garner more attention, which is good for eyeballs on headlines. However it also delays people from realizing how it fits a progression; is the adversary introducing never-seen-before tools and methods or are they just extremely well practiced with what we know?

The latest studies suggest how easy, almost trivial, it would be for security analysts monitoring traffic as well as operations to detect Stuxnet. Regardless of the 0day, the more elements of behavior monitored the higher the attacker has to scale. Companies like ThetaRay have been created on this exact premise, to automate and reduce the cost of the measures a security analyst would use to protect operations. (Already a crowded market.)
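
As a rough illustration of that premise (toy numbers of my own, not any vendor’s actual method), the sketch below baselines a few behavioral features for a single controller and flags observations that deviate sharply from normal; every additional monitored element is another hurdle the attacker has to clear to stay invisible.

```python
# Toy sketch of behavior monitoring, not any product's actual method: baseline a few
# features per device from recent history, then flag observations that deviate far
# from normal. The more features watched, the harder it is for an attacker to hide.
from statistics import mean, stdev

def zscore(history: list, value: float) -> float:
    mu, sigma = mean(history), stdev(history)
    return 0.0 if sigma == 0 else (value - mu) / sigma

# Hypothetical hourly measurements for one PLC over the past week.
history = {
    "outbound_kb":    [110, 120, 115, 118, 112],
    "distinct_peers": [3, 3, 4, 3, 3],
    "write_commands": [20, 22, 19, 21, 20],
}
observed = {"outbound_kb": 410, "distinct_peers": 9, "write_commands": 65}

THRESHOLD = 3.0  # standard deviations from the baseline
flagged = {name: round(zscore(past, observed[name]), 1)
           for name, past in history.items()
           if abs(zscore(past, observed[name])) > THRESHOLD}
print("anomalous features:", flagged)
```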

That’s the way I presented it in 2011 and little has changed since then. Perhaps the most striking attempt to make Stuxnet stand out that I have heard lately was from ex-USAF staff; paraphrasing him, Stuxnet was meant to be to Iran what the atom bomb was to Japan. A weapon of mass-destruction to change the course of war and be apologized for later.

It would be interesting if I could find myself able to agree with that argument. I do not. But if I did agree, then perhaps I could point out in recent research, based on Japanese and Russian first-person reports, the USAF was wrong about Japan. Fear of nuclear assault, let alone mass casualties and destruction from the bombs, did not end the war with Japan; rather leadership gave up hope two days after the Soviets entered the Pacific Theater. And that should make you wonder really about people who say we should be thankful for the consequences of either malware or bombs.

But that is obviously a blog post for another day.

Please find below some references for further reading, which all put Stuxnet in broad context rather than being the “first”:

N. Carr, Development of a Tailored Methodology and Forensic Toolkit for Industrial Control Systems Incident Response, US Naval Postgraduate School 2014

A. Nicholson; S. Webber; S. Dyer; T. Patel; H. Janicke, SCADA security in the light of Cyber-Warfare, 2012

C. Wueest, Targeted Attacks Against the Energy Sector, Symantec 2014

B. Miller; D. Rowe, A Survey of SCADA and Critical Infrastructure Incidents, SIGITE/RIIT 2012

Posted in Energy, History, Security.


Movie Review: JSA (Joint Security Area)

A South Korean soldier slowly hands a shiny mechanical lighter to a North Korean soldier, as if to give thanks through transfer of better technology. The North Korean lights a cigarette and contemplates the object. The South Korean clarifies its value as “you can see yourself in the reflection; see how clean your teeth are”. This movie is full of clever and humorous juxtapositions, similar to questioning urban liberal versus rural conservative values.

The area known as JSA (Joint Security Area) is a small section of the Demilitarized Zone (DMZ) between North and South Korea. The two countries have their military stationed literally standing face-to-face just a few feet from each other. Buildings in the area have served as meeting space, brokered by international oversight, and there is palpable tension in the air.

This movie draws the viewer into this feeling and the lives of soldiers suspended by two countries within an old armistice and trying to find ways around it; men and women trapped inside an internationally monitored agreement to postpone hostilities. Primary roles are played by just four soldiers, two North and two South. Also stepping up to the dance are the investigators and observers, positioned in an awkward third role between the two sides.

The NNSC (Neutral Nations Supervisory Commission) and the US have a dominant secondary tier of influence to the dialogue. I found no mention of other global players, such as China or Russia. Perhaps the absence of these countries is explained by the fact this movie was released in 2000. Today it might be a different story.

Directed by Park Chan-wook, the film’s cultural perspective and references clearly are South Korean. North Korea is portrayed in a surprising light as the more thoughtful and grounded of the two countries. While the South is shown to be obsessed with shallow perfections, looking at itself and boasting about false success, roles played by the North are either weary and wise or kind and naive. It is the US and UN that come out being the real villains in the script, perpetuating a civil war that would heal if only allowed by outside meddlers.

What comes across to me is a third-generation war movie; a Tarantino-style M*A*S*H. There is a strong pacifist-irony thread, clearly influenced by Tarantino’s style of borrowing and remixing old scenes from popular war/gangster movies using today’s direct approach. No subtlety will be found. The viewer is granted displays of full-gore slow-motion blood-splattering scenes of useless death, the sort of lens Tarantino developed as he grew up working in a Los Angeles video-rental store. John Wayne, for example, is played by the North Korean sergeant…

Chan-wook is quoted saying his movies highlight “the utter futility of vengeance and how it wreaks havoc on the lives of everyone involved”.

Despite the gore and sometimes strained irony, the film is suspenseful and on-target with much of its commentary. It offers a counter-intuitive story that veers uncomfortably close to glorifying the North and vilifying the US, delivering over-simplifications of civil war. This is exactly the sort of popular cartoonist perspective many of us need to take into consideration, forcing a rethink of how “the dark side” is portrayed. If Marvel were to dream up a superhero of South Korean origin, it may have more shades of this plot than anything a US director would ever allow.

I give it four out of an unspecified number of penguins.

Posted in Security.


A Political Science TL;DR for InfoSec

More and more often I see those experienced in technology address issues of political science. A malware reverser will speculate on terrorist motives. An expert with network traffic analysis will make guesses about organized crime operations.

When a journalist asks an expert in information security to explain the human science of an attack, such as groups and influences involved, the answers usually are quips and jabs instead of being based on science or study.

This is unfortunate because I suspect with a little reading or discussion things would improve dramatically. My impression is there is no clear guide, however, and when I raise the issue I’ve been asked to put something together. So, since I spent my undergraduate and graduate degrees on political philosophy (ethics of humanitarian intervention), perhaps I can help here in the ways that I was taught.

Let me give a clear example, which recently fell on my desk.

Say Silent Chollima One More Time

About two years ago a private company created by a wealthy businessman, using strong ties to the US government, was launched with ambitious goals to influence the world of information security investigations.

When 2013 kicked off CrowdStrike was barely known outside of inner-sanctum security circles. The stealth startup–founded by former Foundstone CEO, McAfee CTO, and co-author of the vaunted Hacking Exposed books George Kurtz–was essentially unveiled to the world at large at the RSA Security Conference in February.

Just two years after being formed, note how they describe the length of their projects, slyly claiming a start six years before the company existed.

Interviewer: What do you make of the FBI finding — and the president referred to it — that North Korea and North Korea alone was behind this attack?

CrowdStrike: At CrowdStrike, we absolutely agree with that. We have actually been tracking this actor. We actually call them Silent Chollima. That’s our name for this group based that is out of North Korea.

Interviewer: Say the name again.

Crowdstrike: Silent Chollima. Chollima is actually a national animal of North Korea. It’s a mythical flying horse. And we have been tracking this group since 2006.

Hold on to that mythical flying horse for a minute.

To be fair, CrowdStrike may have internally blended their own identity so much with the US government they do not realize those of us outside cringe when they blur the line between a CrowdStrike altar and our state. I think hiring many people away from the US government still does not excuse such casual use of “we” when speaking about intelligence from before the 2013 company launch.

Word use and definitions matter greatly to political scientists. I will dig in to explain why. Take a closer look at that reference to a mythological flying horse. CrowdStrike adds heavy emphasis where none is required. They want everyone to take note of what “we actually call” suspects without any sense of irony for propagandist methods.

Some of it may be just examples of insensitive or silly labeling for convenience, rather than outright propaganda designed to change minds. Here’s their “meet the adversaries” page.

[Image: CrowdStrike’s “meet the adversaries” page, with adversary groups named after animals by country]

Anyone else find it strange that the country of Tiger is an India? What is an India? Ok, seriously though, only Chollima gets defined? I have to look up what kind of animal an India is?

Iran (Persia) being called a Kitten and India the Tiger is surely meant as some light-hearted back-slapping comedy to lighten up the mood in CrowdStrike offices. Long nights poring over forensic material, might as well start filing with pejorative names for foreign indicators because, duh, adversaries.

Political scientists say the words used to describe a suspect before a trial heavily influence everyone’s views. An election also has this effect. Pakistan has some very interesting long-term studies of voting results from ballots for the illiterate, where candidates are assigned an icon.

Imagine a ballot where you have to choose between a chair and a kitten.

[Ballot symbols: a chair and a kitten, each captioned “Vote for me!”]

CrowdStrike makes no bones about the fact they believe someone they suspect will be considered guilty until proven innocent by others. This unsavory political philosophy comes through clearly in another interview (where they also take a moment to mention Chollima):

We haven’t seen the skeptics produce any evidence that it wasn’t North Korea, because there is pretty good technical attribution here. […] North Korea is one of the few countries that doesn’t have a real animal as a national animal. […] Which, I think, tells you a lot about the country itself.

I’ll leave the “pretty good technical attribution” statement alone here because I want to deal with that in a separate post. Let’s break the rest into two parts.

Skeptics Haven’t Produced Evidence

First, to the challenge for skeptics to produce counter-evidence, Bertrand Russell eloquently and completely destroyed such reasoning long ago. His simple celestial teapot analogy speaks for itself.

If I were to suggest that between the Earth and Mars there is a china teapot revolving about the sun in an elliptical orbit, nobody would be able to disprove my assertion provided I were careful to add that the teapot is too small to be revealed even by our most powerful telescopes. But if I were to go on to say that, since my assertion cannot be disproved, it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense.

This is the danger of ignoring lessons from basic political science, let alone its deeper philosophical underpinnings; you end up an information security “thought leader” talking absolute nonsense. CrowdStrike may as well tell skeptics to produce evidence attacks aren’t from a flying horse.

The burden of proof logically and obviously remains with those who sit upon an unfalsifiable belief. As long as investigators offer statements like “we see evidence and you can’t” or “if only you could see what we see” then the burden cannot so easily and negligently shift away.

Perhaps I also should bring in the proper, and sadly ironic, context to those who dismiss or silence skepticism.

Studies of North Korean politics emphasize their leaders often justify total control while denying information to the public, silencing dissent and making skepticism punishable. In an RT documentary, for example, North Korean officers happily say they must do as they are told and they would not question authority because they have only a poor and partial view; they say only their dear leader can see all the evidence.

Skepticism should not be rebuked by investigators if they desire, as scientists tend to, challenges that help them find truth. Perhaps it is fair to say CrowdStrike takes the very opposite approach to what we often call crowdsourcing?

Analysts within the crowd who speak out as skeptics tend to be most practiced in the art of accurate thought, precisely because caution and doubt are not dismissed. Incompleteness is embraced and examined. This is explained with recent studies. Read, for example, a new study called “Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics” that highlights how and why politics alter analyst conclusions.

Analysts also operate under bureaucratic-political pressure and are tempted to respond to previous mistakes by shifting their response thresholds. They are likelier to say “signal” when recently accused of underconnecting the dots (i.e., 9/11) and to say “noise” when recently accused of overconnecting the dots (i.e., weapons of mass destruction in Iraq). Tetlock and Mellers (2011) describe this process as accountability ping-pong.

Then consider an earlier study regarding what makes people into “superforecasters” when they are accountable to a non-political measurement.

…accountability encourages careful thinking and reduces self-serving cognitive biases. Journalists, media dons and other pundits do not face such pressures. Today’s newsprint is, famously, tomorrow’s fish-and-chip wrapping, which means that columnists—despite their big audiences—are rarely grilled about their predictions after the fact. Indeed, Dr Tetlock found that the more famous his pundits were, the worse they did.

CrowdStrike is as famous as they get, as designed from launch. Do they have any non-political, measured accountability?

Along with being skeptical, analysts sometimes are faulted for being grouchy. It turns out in other studies that people in bad moods remember more detail in investigations and provide more accurate facts, because they are skeptical. The next time you want to tell an analyst to brighten up, think about the harm to the quality of their work.

Be skeptical if you want to find the right answers in complex problems.

A Country Without a Real Animal

Second, going back to the interview statement by CrowdStrike, “one of the few countries” without “a real animal as a national animal” is factually easy to check. It is most obviously false.

With a touch of my finger I find mythical national animals used in England, Scotland, Bhutan, China, Greece, Hungary, Indonesia, Iran, Portugal, Russia, Turkey, Vietnam…and the list goes on.

Even if I try to put myself in the shoes of someone making this claim I find it impossible to see how use of national mythology could seem distinctly North Korean to anyone from anywhere else. It almost makes me laugh when I think this is a North Korean argument for false pride: “only we have a mythological national animal”.

The reverse also is awkward. Does anyone really vouch for a lack of a national animal for this territory? In those mythical eight years of CrowdStrike surveillance (or in real time, two years) did anyone notice, for example, some Plestiodon coreensis stamps or the animation starring Sciurus vulgaris and Martes zibellina?

And then, right off the top of my head I think of national mythology frequently used in Russia (two-headed monster) and England (monster being killed):

[Image: St. George and the dragon emblem]

And then the Houston Astros playing the Colorado Rockies pop into my head. Are we really supposed to cheer for a mythical mountain beast, some kind of anthropomorphic purple triceratops, or is it better to follow the commands of a green space alien with antennae? But I digress.

At this point I am unsure whether to go on to the second half of the CrowdStrike statement. Someone who says national mythical animals are unique to North Korea is in no position to assert it “tells you a lot about the country itself”.

Putting myself again in their shoes, CrowdStrike may think they convey “fools in North Korea have false aspirations; people there should be more skeptical”.

Unfortunately the false uniqueness claim makes it hard to unravel who the fools really are. A little skepticism would have helped CrowdStrike realize mythology is universal, even at the national level. So what do we really learn when a nation has evidence of mythology?

In my 2012 Big Data Security presentations I touched on this briefly. I spoke to risks of over-confidence and belief in data that may undermine further analytic integrity. My example was the Griffin, a mythological animal (used by the Republic of Genoa, not to mention Greece and England).

Recent work by an archeologist suggests these legendary monsters were a creative interpretation by Gobi nomads of Protoceratops bones. Found during gold prospecting, the unfamiliar bones turned into stories told to trading partners, which spread further until many people were using Griffins in their architecture and crests.

Ok, so really mythology tells us that people everywhere are creative and imaginative with minds open to possibilities. People are dreamers and have aspirations. People stretch the truth and often make mistakes. The question is whether at some point a legend becomes hard or impossible to disprove.

A flying horse could symbolize North Koreans are fooled by shadows, or believe in legends, but who among us is not guilty of creativity to some degree? Creativity is the balance to skepticism and helps open the mind to possibilities not yet known or seen. It is not unique to any state but rather essential to the human pursuit of truth.

Be creative if you want to find the right answers in complex problems.

Power Tools

Intelligence and expertise in security, as you can see, do not automatically transfer into a foundation for sound political scientific thought. Scientists often trade barbs about who has the more difficult challenges to overcome, yet there are real challenges in everything.

I think it important to emphasize here that understanding human behavior is a very different skill. Not a lesser skill, a different one. XKCD illustrates how a false or reverse-confidence test is often administered:

XKCD Imposter

Being the best brain surgeon does not automatically make someone an expert in writing laws any more than a political scientist would be an expert at cutting into your skull.

Basic skills in any field can be used to test for fraud (imposter exams), while the patience required for more nebulous and open-ended advanced thinking in every field can be abused. Multiplication tables for math need not be memorized because you can look them up to check true/false. So too with facts in political science, as I illustrated with mythology and symbolism for states. Quick, what’s your state animal?

Perhaps it is best said there are two modes to everything: things that are trivial and things that are not yet understood. The latter is what people mean when they say they have found something “sophisticated”.

There really are many good reasons for technical experts to quickly bone up on the long and detailed history of human science. Not least of them is to cut down propaganda and shadows, move beyond the flying horses, and uncover the best answers.

The examples I used above are very specific to current events in order to clarify what a problem looks like. Hopefully you see a problem to be solved and now are wondering how to avoid a similar mistake. If so, now I will try to briefly suggest ways to approach questions of political science: be skeptical, be creative. Some might say leave it to the professionals, the counter-intelligence experts. I say never stop trying. Do what you love and keep doing it.

Achieving a baseline to parse how power is handled should be an immediate measurable goal. Can you take an environment, parse who the actors are, what groups they affiliate with and their relationships? Perhaps you see already the convenient parallels to role based access or key distribution projects.

Aside from just being a well-rounded thinker, learning political science means developing powerful analytic tools that quickly and accurately capture and explain how power works.
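
As a rough illustration of that baseline exercise, here is a minimal Python sketch (the actors, groups and grants are hypothetical, invented for the example) that parses an environment into affiliations and "grants power to" relationships, then asks who can ultimately reach a sensitive asset. The parallel to a role-based access review should be obvious.

from collections import defaultdict

# Hypothetical inventory: actor -> the groups they affiliate with
affiliations = {
    "alice":   {"dba", "incident-response"},
    "bob":     {"dba"},
    "carol":   {"incident-response", "board"},
    "mallory": {"contractor"},
}

# Directed "grants power to" relationships between groups and assets
grants = [
    ("board", "incident-response"),
    ("incident-response", "dba"),
    ("dba", "production-db"),
    ("contractor", "build-server"),
]

def reachable(start):
    """Everything a group can ultimately influence by following grants."""
    edges = defaultdict(set)
    for src, dst in grants:
        edges[src].add(dst)
    seen, stack = set(), [start]
    while stack:
        for nxt in edges[stack.pop()] - seen:
            seen.add(nxt)
            stack.append(nxt)
    return seen

# Which actors can ultimately touch the production database?
for actor, groups in affiliations.items():
    power = set().union(*(reachable(g) | {g} for g in groups))
    if "production-db" in power:
        print(actor, "can reach production-db via", sorted(groups))

Swap in real directory or environment data and the same few lines begin to show where power concentrates, which is exactly the kind of question a political scientist would ask of any group.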

Being Stateful

Power is the essence of political thought. The science of politics deals with understanding how groups govern themselves and regulate power. Political thinking is everywhere, and has been forever, from the smallest group to the largest. Many different forms are possible; both the framework of the organization and its leadership can vary greatly.

Some teach mainly about relationships between states, because states historically have been a foundation for generating power. This is problematic as old concepts grow older, especially in IT, given that no single agreed-upon definition of "state" yet exists.

Could users of a service ever be considered a state? Google might be the most vociferous and open opponent of our old definitions of state. While some corporations engage with states and believe in collaboration with public services, Google appears to define a state as an irrelevant localized tax hindering its global ambitions.

A major setback to this definition came when an intruder was detected moving about Google's state-less global flat network to perpetrate IP theft. Google believed China was to blame and went to the US government for services; only too late did the heads of Google realize state-level protection without a state affiliation could prove impossible. Here is a perfect example of Google engineering anti-state theory full of dangerous presumptions that court security disaster:


Google domination

A state is arguably made up of people, who govern through representation of their wants and needs. Google sees benefits in taking all the power and owing nothing in return, doing as it pleases because it knows best. An engineer who studied political science might quickly realize that removing the ability of people to represent themselves as a state, forcing them to bend at the whim of a corporation, would be a reversal in fortune rather than progress.

It is thus very exciting to think how today technology can impact definitions for group membership and the boundaries of power. Take a look at an old dichotomy between nomadic and pastoral groups. Some travel often, others stay put. Now we look around and see basic technology concepts like remote management and virtual environments forcing a rethink of who belongs to what and where they really are at any moment in time.

Perhaps you remember how Amazon wanted to provide cloud services to the US government under ITAR requirements?

Amazon Web Services’ GovCloud puts federal data behind remote lock and key

The question of maintaining "state" information was raised because ITAR protects US secrets by requiring that only citizens have access. Rather than fix their cloud's inability to provide security at the required level, Amazon created a dedicated private datacenter where only US citizens had keys. Physical separation. A more forward-thinking solution would have been to develop encryption and identity management that avoided breaking up the cloud while still complying with requirements.

This problem came up again in reverse when Microsoft was told by the US government to hand over data in Ireland. Had Microsoft built a private-key solution, linked to the national identity of users, it could have demonstrated an actual lack of access to that data. Instead you find Microsoft boasting to the public that state boundaries have been erased and your data moves with you wherever you go, while telling the US government that data in Ireland cannot be accessed.
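
A minimal sketch of what such a design could look like, assuming a jurisdiction-bound envelope-encryption scheme of my own invention rather than anything Amazon or Microsoft actually built (it uses the third-party Python "cryptography" package): the key-encryption key stays with a key authority inside the user's jurisdiction, so the provider holds only ciphertext and a wrapped key it cannot open on its own.

from cryptography.fernet import Fernet

class JurisdictionKeyAuthority:
    """Holds the key-encryption key; it never leaves the jurisdiction."""
    def __init__(self, jurisdiction):
        self.jurisdiction = jurisdiction
        self._kek = Fernet(Fernet.generate_key())

    def wrap(self, data_key):
        return self._kek.encrypt(data_key)

    def unwrap(self, wrapped_key):
        return self._kek.decrypt(wrapped_key)

class CloudProvider:
    """Stores ciphertext and wrapped keys only; cannot decrypt by itself."""
    def __init__(self):
        self._store = {}

    def put(self, object_id, ciphertext, wrapped_key):
        self._store[object_id] = (ciphertext, wrapped_key)

    def get(self, object_id):
        return self._store[object_id]

# Data is encrypted under a fresh data key, which is wrapped by an Irish
# key authority before anything reaches the provider.
ireland = JurisdictionKeyAuthority("IE")
cloud = CloudProvider()

data_key = Fernet.generate_key()
ciphertext = Fernet(data_key).encrypt(b"customer email archive")
cloud.put("mailbox-42", ciphertext, ireland.wrap(data_key))

# Reading back requires the in-jurisdiction authority to unwrap the key;
# an order served only on the provider yields bytes it cannot decrypt.
stored_ciphertext, stored_wrapped_key = cloud.get("mailbox-42")
plaintext = Fernet(ireland.unwrap(stored_wrapped_key)).decrypt(stored_ciphertext)
assert plaintext == b"customer email archive"

The design choice worth noting is that demonstrating a lack of access becomes a property of key custody rather than a promise in a policy document.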

Being stateful is not just a firewall concern, it really has roots in political science.

An Ominous Test

Which scares you more: someone who can move about freely, or a person who digs in for the long haul and claims proof of boundary violations where you see none?

Whereas territory used to be an essential characteristic of a state, today we wonder what membership and presence mean when someone can remain always connected, not to mention roam within overlapping groups. Boundaries may form around nomads who carry their farms with them (e.g. playing FarmVille) and of course pastoralism changes when it can move freely without losing control (e.g. remote management of a data center).

Technology is fundamentally altering the things we used to rely upon to manage power. On the one hand this is of course a good thing. Survivability is an aim of security, reducing the impact of disaster by making our data more easily spread around and preserved. On the other hand this great benefit also poses a challenge to security. Confidentiality is another aim of security, controlling the spread of data and limiting preservation to reduce exposure. If I can move 31 TB/hr (a recent estimate) to protect data from being destroyed, it also becomes harder to stop massive exfiltration of data.

From an information security professional’s view the two sides tend to be played out in different types of power and groups. We rarely, if ever, see a backup expert in the same room as a web application security expert. Yet really it’s a sort of complicated balance that rests on top of trust and relationships, the sort of thing political scientists love to study.

With that in mind, notice how Listverse plays to popular fears with a top ten “Ominous State-Sponsored Hacker Group” article. See if you now, thinking about a balance of power between groups, can find flaws in their representation of security threats.

It is a great study. Here are a few questions that may help:

  • Why would someone use “ominous” to qualify “state-sponsored” unless there also exist non-ominous state-sponsored hacker groups?
  • Are there ominous hacker groups that lack state support? If so, could they out-compete state-sponsored ones? Why or why not? Could there be multiple affiliations, such that hackers could be sponsored across states or switch states without detection?
  • What is the political relationship, the power balance, between those with a target surface that gives them power (potentially running insecure systems) and those who can more efficiently generate power to point out flaws?
  • How do our own political views affect our definitions and what we study?

I would love to keep going yet I fear this would strain too far the TL;DR intent of the post. Hopefully I have helped introduce someone, anyone (hi mom!), to the increasing need for combined practice in political science and information security. This is a massive topic and perhaps if there is interest I will build a more formal presentation with greater detail and examples.

Updated 19 January: added “The Psychology of Intelligence Analysis” citation and excerpt.

Posted in Security.


The Beginning Wasn’t Full-Disclosure

An interesting personal account of vulnerability disclosure called “In the Beginning There was Full Disclosure” makes broad statements about the past.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it.

I don’t know about you, but immediately my brain starts searching for a date. What year was this beginning?

No dates are given, only clues.

First clue, a reference to RFP.

So a guy named Rain Forest Puppy published the first Full Disclosure Policy promising to release vulnerabilities to vendors privately first but only so long as the vendors promised to fix things in a timely manner.

There may be earlier versions. The RFP document doesn’t have a date on it, but links suggest 2001. Lack of date seems a bit strange for a policy. I’ll settle on 2001 until another year pops up somewhere.

Second clue, vendors, meaning Microsoft.

But vendors didn’t like this one bit and so Microsoft developed a policy on their own and called it Coordinated Disclosure.

This must have been after the Gates memo of 2002.

Both clues say the beginning was around 2000. That seems odd because software-based updates in computers trace back to 1968.

It also is odd to say the beginning was a Microsoft policy called Coordinated Disclosure. Microsoft says they released that in 2010.

Never mind 2010. Responsible disclosure was the first policy/concept at Microsoft, because right after the Gates memo on security they mention it in 2003, discussing how Tavis Ormandy decided unilaterally to release a 0day on XP.

Thus all of the signals, as I dug through the remainder of the post, suggest vulnerability research beginning around 15 years ago. To be fair, the author gives a couple earlier references:

…a debate that has been raging in security circles for over a hundred years starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990’s as security researchers started finding vulnerabilities in products.

Yet these are too short a history (1890s wasn’t the first release of locksmith secrets) and not independent (L0pht takes credit for raising the debate around them) for my tastes.

Locksmith secrets are thousands of years old. Their disclosure follows. Pin-tumblers get called Egyptian locks because that's where they are said to have originated; technically the Egyptians likely copied them out of Mesopotamia (today Iraq). Who believes Mesopotamia was unhappy their lock vulnerabilities were known? And that's really only the tip of the iceberg for thousands of years of disclosure history.

I hear L0pht taking credit again. Fair point. They raised a lot of awareness while many of us were locked in dungeons. They certainly marketed themselves well in the 1990s. No question there. Yet were they raising the debate or joining one already in progress?

To me the modern distributed systems debate raged much, much earlier. The 1968 Carterfone case, for example, ignited a whole generation seeking boundaries for "any lawful device" on public communication lines.

In 1992 Wietse Venema appeared quite adamant about the value of full disclosure, as if trying to argue it needed to happen. By 1993 he and Dan Farmer published the controversial paper "Improving the Security of Your Site by Breaking Into It".

They announced a vulnerability scanner that would be made public. It was the first of its kind. For me this was a turning point in the industry, trying to justify visibility in a formal paper and force open discussion of risk within an environment that mostly had preferred secret fixes. The public Emergency Response and Incident Advisory concepts still meant working with vendors on disclosure, which I will get to in a minute.

As a side-note the ISS founder claims to have written an earlier version of the same vulnerability scanner. Although possible, so far I have found nothing outside his own claims to back this up. SATAN was free and has far wider recognition (e.g. a USENIX paper), and it also was easily found running in the early 1990s. I remember when ISS first announced in the mid 1990s; it appeared to be a commercial version of SATAN that did not even try to distinguish or back-date itself.

But I digress. Disclosure of vulnerabilities in 1992 felt very controversial. Those I found were very hush-hush, and the deep ethical discussions of exposing weakness were clearly captured in the Venema/Farmer paper. There definitely was still secrecy and not yet a full-disclosure climate.

Just to confirm I am not losing my memory, I ran a few searches on an old vulnerability disclosure list, the CIAC. Sure enough, right away I noticed secretive examples. A January 4, 1990 notice for the Texas Instruments D3 Process Control System gives no details, only:

TI Vuln Disclosure

Also in January 1990, Apple has the same type of vulnerability notice.

Even more to the point, and speaking of SATAN, I also noticed HP using a pre-release notice. This confirms my memory isn't far off; full disclosure was not the norm. HP issued a notice before the researcher made the vulnerabilities public.

HP SATAN

Vendors shifted how they responded not because a researcher released a vulnerability under the pride of full disclosure, which a vendor had powerful legal and technical tools to dispute. Rather, SATAN changed the economics of disclosure by making the discussion with a vendor about self-protection through awareness first-person and free.

Anyone could generate a new report, anywhere, anytime, so the major vendors had to contemplate the value of responding to an overall "assessment" relative to other vendors.

Anyway, great thoughts on disclosure from the other blog, despite the difference on when and how our practices started. I am ancient in Internet years and perhaps more prone than most to dispute historic facts. Thus I encourage everyone to search early disclosures for further perspective on a “Beginning” and how things used to run.

Updates:

@ErrataRob points out SATAN was automating what CERT had already outed; the BUGTRAQ mailing list (started in 1993) was meant to crowd-source disclosures after CERT wasn't doing it very well. Before CERT, people traded vulns for a long time in secret. CERT made it harder, but it was BUGTRAQ that really shut down trading because it was so easy to report.

@4Dgifts points out that discussion of vulns on the comp.unix.security USENET group started around 1984.

@4Dgifts points out a December 1994 debate where the norm clearly was not full disclosure. The author even suggests blackhats masquerade as whitehats to get early access to exploits:

All that aside, it is not my position to send out full disclosure, much as I might like to. What I sent to CERT was properly channeled through SCO’s CERT contact. CERT is a recognized and official carrier for such materials. 8LGM is, I don’t know, some former “black hat” types who are trying pretty hard to wear what looks like a “white hat” these days, but who can tell? If CERT believes in you then I assume you’ll be receiving a copy of my paper from them; if not, well, I know you’re smart enough to figure it out anyway.

[…]

Have a little patience. Let the fixed code propagate for a while. Give administrators in far off corners of the world a chance to hear about this and put up defenses. Also, let the gory details circulate via CERT for a while — just because SCO has issued fixes does not mean there aren’t other vendors whose code is still vulnerable. If you think this leaves out the freeware community, think again. The people who maintain the various login suites and other such publically available utilities should be in contact with CERT just as commercial vendors are; they should receive this information through the same relatively secure conduits. They should have a chance to examine their code and if necessary, distribute corrected binaries and/or sources before disclosure. (I realize that distributing fixed sources is very similar to disclosure, but it’s not quite the same as posting exploitation scripts).

Posted in History, Security.


US President Calls for Federal 30-day Breach Notice

Today the US moved closer to a federal consumer data breach notification requirement (healthcare has had a federal requirement since 2009 — see Eisenhower v Riverside for why healthcare is different from consumer).

PC World says a presentation to the Federal Trade Commission sets the stage for a Personal Data Notification & Protection Act (PDNPA).

U.S. President Barack Obama is expected to call Monday for new federal legislation requiring hacked private companies to report quickly the compromise of consumer data.

Every state in America has had a different approach to breach deadlines, typically led by California (starting in 2003 with SB1386 consumer breach notification), and more recently led by healthcare. This seems like an approach that has given the Feds time to reflect on what is working before they propose a single standard.

In 2008 California moved to a more aggressive 5-day notification requirement for healthcare breaches after a crackdown on UCLA executive management missteps in the infamous Farah Fawcett breaches (under Gov Schwarzenegger).

California this month (AB1755, effective January 2015, approved by the Governor September 2014) relaxed its healthcare breach rules from 5 to 15 days after reviewing 5 years of pushback on interpretations and fines.

For example, in April 2010, the CDPH issued a notice assessing the maximum $250,000 penalty against a hospital for failure to timely report a breach incident involving the theft of a laptop on January 11, 2010. The hospital had reported the incident to the CDPH on February 19, 2010, and notified affected patients on February 26, 2010. According to the CDPH, the hospital had “confirmed” the breach on February 1, 2010, when it completed its forensic analysis of the information on the laptop, and was therefore required to report the incident to affected patients and the CDPH no later than February 8, 2010—five (5) business days after “detecting” the breach. Thus, by reporting the incident on February 19, 2010, the hospital had failed to report the incident for eleven (11) days following the five (5) business day deadline. However, the hospital disputed the $250,000 penalty and later executed a settlement agreement with the CDPH under which it agreed to pay a total of $1,100 for failure to timely report the incident to the CDPH and affected patients. Although neither the CDPH nor the hospital commented on the settlement agreement, the CDPH reportedly acknowledged that the original $250,000 penalty was an error discovered during the appeal process, and that the correct calculation of the penalty amount should have been $100 per day multiplied by the number of days the hospital failed to report the incident to the CDPH for a total of $1,100.

It is obvious that too long a timeline hurts consumers. Too short a timeline has been proven to force mistakes, with covered entities rushing to conclusions and then sinking time into appealing unjust fines and repairing reputation.

Another risk with too-short timelines (and a complaint you will hear from investigation companies) is that early notification undermines quiet, thorough investigations (e.g. criminals will erase their tracks). This is a valid criticism; however, it does not clearly outweigh the benefits of early notification to victims.

First, a law-enforcement delay caveat is meant to address this concern. AB1755 allows a report to be submitted 15 days after the end of a law-enforcement imposed delay period, similar to caveats found in prior requirements to assist important investigations.

Second, we have not seen huge improvements in attribution/accuracy after extended investigation time, mostly because politics start to settle in. I am reminded of when Walmart in 2009 admitted to a 2005 breach. Apparently they used the time to prove they did not have to report credit card theft.

Third, consider the value relative to the objective of protecting data from breach. Take the 30-day Mandiant 2012 report for the South Carolina Department of Revenue. It ultimately was unable to figure out who attacked (although they still hinted at China). It is doubtful any more time would have resolved that question. The AP has reported Mandiant charged $500K or higher, and it also is doubtful many will find such high costs justified. Compare their investigation rate with the cost of improving victim protection:

Last month, officials said the Department of Revenue completed installing the new multi-password system, which cost about $12,000, and began the process of encrypting all sensitive data, a process that could take 90 days.

I submit to you that a reasonably short and focused investigation time saves money and protects consumers early. Delay for private investigation brings little benefit to those impacted. Fundamentally, who attacked tends to be less important than how a breach happened; determining how takes a lot less time to investigate. As an investigator I always want to get to the who, yet I recognize this is not in the best interest of those suffering. So we see diminishing value in waiting, increased value in notification. Best to apply fast pressure, and 30 days seems reasonable enough to allow investigations to reach conclusive and beneficial results.

Internationally, Singapore has the shortest deadline I know of, at just 48 hours. If anyone thinks keeping track of all the US state requirements has been confusing, working globally gets really interesting.

Update, Jan 13:

Brian Krebs blogs his concerns about the announcement:

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose publicly the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

First, federal preemption of state laws sounds worse than it probably is. Covered entities of course want more local control at first, to weigh in heavily on politicians and set the rule. Yet look at how AB1755 in California unfolded. The medical lobby tried to get the notification window moved from 5 days to 60 days and ended up at 15. A federal 30-day rule, even where preemptive, isn't completely out of the blue.

Second, disclosure of “how” a breach happened is a separate issue. The payment industry is the most advanced in this area of regulation; they have a council that releases detailed methods privately in bulletins. The FBI also has private methods to notify entities of what to change. Even so, generic bulletins are often sufficient to be actionable. That is why I mentioned the South Carolina report earlier. Here you can see useful details are public despite their applicability:

Mandiant Breach Report on SCDR

Obama also today is expected to make a case in front of the NCCIC for better collaboration between private and government sectors (Press Release). This will be the forum for this separate issue. It reminds me of the 1980s debate about control of the Internet led by Rep Glickman and decided by President Reagan. The outcome was a new NIST and the awful CFAA. Let’s see if we can do better this time.

Letters From the White House:

Posted in History, Security.


The (Secret) History of the Banana Split

If there is a quintessential American dessert it is the banana split. Why? Although we can credit Persians and Arabs with the invention of ice-cream (nice try, China), the idea of putting lots of ice-cream on a split banana covered in everything you can find but the kitchen sink…surely that is pure American innovation.

After reading many food history pages and mulling their facts a bit I realized something important was out of place. There had to be more to this story than just Americans love big things — all the fixings — and one day someone put everything together. Why America? When?

I found myself digging around for more details and eventually ended up with this official explanation.

In 1904 in Latrobe, the first documented Banana Split was created by apprentice pharmacist David Strickler — sold here at the former Tassell Pharmacy. Bananas became widely available to Americans in the late 1800s. Strickler capitalized on this by cutting them lengthwise and serving them with ice cream. He is also credited with designing a boat-shaped glass dish for his treat. Served worldwide, the banana split has become a prevalent American dessert.

The phrase that catches my eye, almost lost among the other boring details, is that someone with an ingredient “widely available…capitalized”; capitalism appears to be the key to unlock this history.

Immigration and Trade

The first attribution goes to Italian immigrants who brought spumoni to America around the 1870s. This three-flavor ice-cream often came in the three colors of their home country's flag. No problem for Americans. The idea of a three-flavor treat was taken and adapted to local favorites: chocolate, strawberry and vanilla. Ice-cream became more widely available by the 1880s and experimentation was inevitable as competition boomed. It obviously was a very popular food by the 1904 St. Louis World's Fair, which infamously popularized eating it with cones.

In parallel, new trade developments emerged. Before the 1880s there were few bananas found in America. America bought around $250K of bananas in 1871. Only thirty years later the imports had jumped 2,460% to $6.4m and were in danger of becoming too common. Bananas being both easily sourced and yet still exotic made them ideal for experiments with ice-cream. The dramatic change in trade and availability was the result of the corporate conglomerate formed in 1899 called United Fruit Company. I’ll explain more about them in a bit.

So what we’re talking about, at this point, is Persian/Arab ice-cream modified and brought by Italian immigrants to America, further modified in America and then dropped on a newly marketed corporate banana. All the fixings on top of a banana-split make perfect sense if you put yourself in the shoes of someone working in the soda/pharmacy business of 1904.

Bananas and Pineapples Were The New Thing

Imagine you're in a drugstore and supposed to be offering something amazing or exotic to draw in customers. People could go to any drugstore. You pull out the hot new banana fruit, add the three most popular flavors (impressive yet not completely unfamiliar) and then dump all the sauces you've got on top. You charge double the cost of any other dessert. Should you even add pineapple on top? Of course! You just started getting pineapple in a new promotion by the Dole corporation:

In 1899 James Dole arrived in Hawaii with $1000 in his pocket, a Harvard degree in business and horticulture and a love of farming. He began by growing pineapples. After harvesting the world’s sweetest, juiciest pineapples, he started shipping them back to mainland USA.

I have mentioned before on this blog how the US annexed Hawaii by sending in the Marines. Interesting timing, no? Food historians rarely bother to talk about this side of the equation, so indulge me for a moment. I sense a need for this story to be told.

The arrival of Mr. Dole to Hawaii in 1899, and a resulting sudden widespread availability of pineapples in drugstores for banana splits, is a dark chapter in American politics.

1890 American Protectionism and Hawaiian Independence

The US Republican Congress in 1890 had approved the McKinley Tariff, which raised the cost of imports 40-50%. Although it left an exception for sugar, it still removed Hawaii's "favored status" and rewarded domestic production.

Only two years after the Tariff, Hawaii's sugar exports to America had dropped 40%, throwing the economy into shock. Sugar plantations run by white American businessmen quickly cooked up ideas to reinstate profits; removing Hawaii's independence was their favored plan.

At the same time these businessmen conspired to remove Hawaiian independence, Queen Lili`uokalani took Hawaii’s throne and indicated she would reduce foreign interference, drafting a new constitution.

The two sides were headed for disaster in 1892 despite US government shifting dramatically to Democratic control (leading to the repeal of the McKinley Tariff in 1894). As Hawaii hinted towards more national control the foreign businessmen in Hawaii increasingly called on America for annexation.

An “uprising” in early 1893 forced the Queen to abdicate power to a government supported by the sugar growers. US Marines stormed the island under the premise of protecting American businessmen and this new government.

The nation's fate seemed sealed; however, it actually remained uncertain, as a newly elected US President was openly opposed to imperialism and annexation. He even spoke of support for the Queen. Congressional pressure mounted and by 1897 the President seemed less likely to oppose annexation. Finally in 1898, given the war with Spain, Hawaii became of strategic importance and abruptly, definitively lost its independence.

Few Americans I speak with realize the US basically sent Marines to annex Hawaii because it increased profits for American plantation owners and lowered the cost of sugar for Americans, then sealed the annexation with war.

Total Control Over Fruit Sources

Anyway, remember Mr. Dole arriving in Hawaii in 1899 ready to start shipments of cheap pineapples? His arrival and success were a function of the annexation of an independent state and the creation of a pro-American puppet government to facilitate business and military interests. This is why drugstores in 1904 suddenly had ready access to pineapple.

And back to the question of bananas, the story is quite similar. The United Fruit Company quickly was able to establish US control over plantations in Colombia, Costa Rica, Cuba, Jamaica, Nicaragua, Panama, Santo Domingo and Guatemala.

Nearly half of Guatemala fell under control of the US conglomerate corporation, apparently, and yet no taxes had to be paid; telephone communications as well as railways, ports and ships all were owned by United Fruit Company. The massive level of US control initially was portrayed as an investment and benefit to locals, although hindsight has revealed another explanation.

“As for repressive regimes, they were United Fruit’s best friends, with coups d’état among its specialties,” Chapman writes. “United Fruit had possibly launched more exercises in ‘regime change’ on the banana’s behalf than had even been carried out in the name of oil.” […] “Guatemala was chosen as the site for the company’s earliest development activities,” a former United Fruit executive once explained, “because at the time we entered Central America, Guatemala’s government was the region’s weakest, most corrupt and most pliable.”

Thus the term “banana republic” was born.

And while this phrase was meant to be pejorative and negative, it gladly was adopted in the 1980s by a couple of Americans who traveled the world to blatantly steal clothing designs and resell them as a "discovery". This success at appropriating ideas led to the big brand stores selling inexpensive clothes most people know today, found in most malls. The irony of the name surely has been lost on everyone.

So too with the dessert. In other words, the banana split is a by-product, or modern representation, of America's imperialist expansion and corporate-led brutal subjugation of freedoms in foreign nations during the early 1900s.

Now you know the secret behind widespread availability of inexpensive ingredients that made a famous and iconic dessert possible.

Posted in Food, History, Security.


Linguistics as a Tool for Cyber Attack Attribution

From 2006 to 2010 my mother and I presented a linguistic analysis of Advanced Fee Fraud (the 419 scam).

One of the key findings we revealed (also explained in other blog posts and our 2006 paper) is that intelligence does not prevent someone from being vulnerable to simple linguistic attacks. In other words, highly successful and intelligent analysts have a predictable blind-spot that leads them to mistakes in attribution.

The title of the talk was usually "There's No Patch for Social Engineering" because I focused on helping users avoid being lured into phishing scams and fraud. We had very little press attention, and in retrospect, instead of raising awareness in talks and papers alone (the peer-review model), we perhaps should have open-sourced a linguistic engine for detecting fraud. I suppose it depends on how we measure impact.
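
For the curious, here is a toy sketch of what such an engine might have looked like. The markers and weights are hypothetical illustrations invented for this post, not the analysis from our paper or talks; a real engine would weigh syntax, spelling and discourse features, not just keywords.

import re

# Each rule: (description, compiled pattern, weight). Purely illustrative.
RULES = [
    ("urgency framing", re.compile(r"\b(urgent|immediately|without delay)\b", re.I), 2),
    ("confidentiality appeal", re.compile(r"\b(strictly confidential|top secret)\b", re.I), 2),
    ("large sum stated precisely", re.compile(r"\$\s?\d{1,3}(,\d{3}){2,}(\.\d+)?"), 3),
    ("next of kin / inheritance", re.compile(r"\b(next of kin|inheritance|beneficiary)\b", re.I), 3),
    ("upfront fee request", re.compile(r"\b(processing fee|transfer charge|courier fee)\b", re.I), 3),
]

def score_message(text):
    """Return a naive fraud score and the markers that fired."""
    hits = [(name, weight) for name, pattern, weight in RULES if pattern.search(text)]
    return sum(weight for _, weight in hits), [name for name, _ in hits]

sample = ("URGENT and strictly confidential: as next of kin you will receive "
          "$25,500,000.00 once the processing fee is paid.")
score, markers = score_message(sample)
print(score, markers)  # a high score with several markers suggests closer review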

Despite the lack of journalist interest, we received a lot of positive feedback from attendees: investigators, researchers and analysts. That felt like success. After presenting at the High Technology Crime Investigation Association (HTCIA), for example, I had several ex-law enforcement and intelligence officers thank me profusely for explaining in detail and with data how intelligence can actually make someone more prone to misattribution, to falling victim to bias-laced attacks. They suggested we go inside agencies to train staff behind closed doors.

Recently the significance of the work has taken a new turn; I see a spike in interest in my blog post from 2012, coupled with news that linguistics is being used to analyze Sony attack attribution. Ironically the story is by a "journalist" at the NYT who blocked me on Twitter. I'm told by friends I was blocked because I once used a Modified Tweet (MT) to parody her headline.

Since long before the beginning of the Sony attack I have tried to raise the importance of linguistic analysis for attribution, as I tweeted here:

NSA, @Mandiant and @FireEye analysts say no English or bad grammar means u not no American

And then at the start of the Sony news on December 8, I tweeted a slide from our 2010 presentation. Also recently I tweeted:

good analysis causes anti-herding behavior: “separates social biases introduced by prior ratings from true value”

Tweets unfortunately are disjointed and reach a far smaller audience than my blog posts, so perhaps it is time to return to this topic here instead? I thus am posting the full presentation again:

RSAC_SF_2010_HT1-106_Ottenheimer.pdf

Look forward to discussing this topic further, as it definitely needs more attention in the information security community. Kudos to Jeffrey Carr for pursuing the topic and invitation to participate.

Updated to add: Perhaps it also would be appropriate here to mention my mother's book, The Anthropology of Language: An Introduction to Linguistic Anthropology.

Ottenheimer’s authoritative yet approachable introduction to the field’s methodology, skills, techniques, tools, and applications emphasizes the kinds of questions that anthropologists ask about language and the kinds of questions that intrigue students. The text brings together the key areas of linguistic anthropology, addressing issues of power, race, gender, and class throughout. Further stressing the everyday relevance of the text material, Ottenheimer includes “In the Field” vignettes that draw you in to the chapter material via stories culled from her own and others’ experiences, as well as “Doing Linguistic Anthropology” and “Cross-Language Miscommunication” features that describe real-life applications of text concepts.

Posted in Security.


How the NSA Can Tell if You Are a Foreigner

For several years I have tried to speak openly about why I find it disappointing that analysts rely heavily (sometimes exclusively) on language to determine who is a foreigner.

Back in 2011 I criticized McAfee for their rather awful analysis of language.

They are making some funny and highly improbable assumptions: … The attackers used Chinese language attack tools, therefore they must be Chinese. This is a reverse language bias that brings back memories of L0phtCrack. It only ran in English.

Here’s the sort of information I have presented most recently for people to consider:

You see above that analysts told a reporter the presence of a Chinese language pack was the clue to Chinese design and operation of attacks on Russia. Then further investigation revealed the source actually was Korea. Major error, no? Yet it seems to have been reported as only an "oops" instead of a WTF.

At a recent digital forensics and incident response (DFIR) meeting I pointed out that the switch from Chinese to Korean origin of attacks on Russia of course was a huge shift in attribution, one with potential connections to the US.

This did not sit well with at least one researcher in the audience. "What proof do you have there are any connections from Korea to the US?" they yelled out. I assumed they were facetiously trying to see if I had evidence of an English language pack to prove my point.

In retrospect they may actually have been seriously asking me to offer clues why Korean systems attacking Russia might be linked to America. I regret not taking the time to explain what clues more significant than a language pack tend to look like. Cue old history lesson slides…but I digress.

Here’s another slide from the same talk I gave about attribution and language. I point to census data with the number and location of Chinese speakers in America, and most popular languages used on the Internet.
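
To make the base-rate point concrete, here is a minimal Bayes sketch. The numbers are purely illustrative placeholders, not the census figures from the slide; the point is only that a locale artifact common outside the suspected country cannot carry an attribution by itself.

# P(hypothesis | artifact) by Bayes' rule for a binary hypothesis.
def posterior(prior, p_artifact_if_true, p_artifact_if_false):
    joint_true = prior * p_artifact_if_true
    joint_false = (1 - prior) * p_artifact_if_false
    return joint_true / (joint_true + joint_false)

# Hypothesis: the operator is based in China. All values are assumptions.
prior = 0.30                # analyst's prior before seeing the artifact
p_pack_if_china = 0.95      # Chinese-locale settings if the operator is in China
p_pack_if_elsewhere = 0.15  # the same settings elsewhere: diaspora, shared
                            # tooling, deliberate false flags

print(round(posterior(prior, p_pack_if_china, p_pack_if_elsewhere), 2))
# roughly 0.73: a nudge, not proof, and the update shrinks further if the
# artifact is more common elsewhere or is trivially forged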

Unlike McAfee, mentioned above, FireEye and Mandiant have continued to ignore the obvious and point to Chinese language as proof of someone being foreign.

Consider for a moment that the infamous APT1 report suggests that language proves nothing at all. Here is page 5:

Unit 61398 requires its personnel to be…proficient in the English language

Thus proving APT1 are English-speaking and therefore not foreigners? No, wait, I mean proving that APT1 are very dangerous because you can never trust anyone required to be proficient in English.

But seriously, Mandiant sets this out presumably to establish two things.

First, “requires to be proficient” is a subtle way to say Chinese never will do better than “proficient” (non-native) because, foreigners.

Second, the Chinese target English-speaking victims (“Only two victims appear to operate using a language other than English…we believe that the two non-English speaking victims are anomalies”). Why else would the Chinese learn English except to be extremely targeted in their attacks — narrowing their focus to basically everywhere people speak English. Extremely targeted.

And then on page 6 of APT1 we see supposed proof from Mandiant of something else very important. Use of a Chinese keyboard layout:

…the APT1 operator’s keyboard layout setting was “Chinese (Simplified) – US Keyboard”

On page 41 (suspense!) they explain why this matters so much:

…Simplified Chinese keyboard layout settings on APT1’s attack systems, betrays the true location and language of the operators

Mandiant gets so confident about where someone is from, based on assessing language, that they even try to convince the reader that Americans do not make grammar errors. Errors in English (failed attempts at proficiency) prove they are dealing with a foreigner.

Their own digital weapons betray the fact that they were programmed by people whose first language is not English. Here are some examples of grammatically incorrect phrases that have made it into APT1’s tools

It is hard to believe this is not meant as a joke. There is a complete lack of linguistic analysis, for example, just a strange assertion about proficiency. In our 2010 RSAC presentation on the linguistics of threats we give analysis of phrases and show how syntax and spellings can be useful to understand origins. I can only imagine what people would have said if we tried to argue “Bad Grammar Means English Ain’t Your First Language”.

Of course I am not saying Mandiant or others are wrong to have suspicion of Chinese connections when they find some Chinese language. Despite analysts wearing clothes with Chinese language tags and using computers that probably have Chinese language print there may be some actual connections worth investigating further.

My point is that the analysis offered to support conclusions has been incredibly weak, almost to the point of being a huge distraction from the quality in the rest of the reports. It makes serious work look absurd when someone over-emphasizes language spoken as proof of geographic location.

Now, in some strange twist of "I told you so", the Twittersphere has come alive with condemnation of an NSA analyst for relying too heavily on language.

Thank you to Chris and Halvar and everyone else for pointing out how awful it is when the NSA does this kind of thinking; please also notice how often it happens elsewhere.

More people in the security community need to speak out against this on a regular basis. It really is far too common in far too many threat reports to be treated as unique or surprising when the NSA does it, no?

Posted in History, Security.