Category Archives: Security

Time to Talk about APT

I posted a response to a Securosis blog post where they say this:

There’s a lot of hype in the press (and vendor pitches) about APT — the Advanced Persistent Threat. Very little of it is informed, and many parties within the security industry are quickly trying to co-opt the term in order to advance various personal and corporate agendas. In the process they’ve bent, manipulated and largely tarnished what had been a specific description of a class of attacker.

My response may show up sometime soon; in the meantime, here is what I said:

Excellent comments above. I agree with most here and just wanted to relate an interesting experience and three points about APT.

Some government officials met with me after my talk on security breaches at RSA 2010 in San Francisco. They laughed at me and said the word/acronym APT is hyped too much and misunderstood. They also gave me the “we know far more than anyone else about what is really going on” story. I held back from being a smart-ass about all this posturing nonsense and instead asked for details.

First, I say worry less about use of language and words like APT. Clarity and understanding have a place and time — like a meeting where action is required. Public discussion is not that time. Demanding absolute accuracy in language and definitions during general conversation is really a straw-man argument — an attack on a phrase or word instead of the substance being put forward. We could also get upset about misuse of the word too versus to, the word hacker, the phrase critical infrastructure, etc., but open communication is never really clean. If you say car, you could mean just about anything, yet no one gets upset about car. Words get “bent, manipulated and largely tarnished” yet language works amazingly well. Cool, no? Or should I say that it’s hot? Move along please. If you struggle with APT you will really have a hard time with cloud.

Second, I agree completely that sharing APT info is better but I have seen two reasons used for controlled disclosure instead of openness.

A) Power and politics unfortunately sneak into this. The relatively immature and open field of play in Washington gives an incentive for sparse and sometimes unverifiable disclosures. Releasing information in a limited fashion can create dramatic influence on the Hill. Was it a coincidence, for example, that during the debate over control and leadership of Cyber Command the WSJ ran a story that spies had infiltrated the US energy sector? A totally open discussion would not have had the same effect — reporters might have come to a different conclusion. Civilian leadership will lose control if the military and intelligence communities do not have more open discussion with them. Classic political science.

B) There is some chance that disclosure during an ongoing investigation could compromise its success. Only after the investigation is over should it be made open to study. The questions are who gets to decide when a case is closed and how much they should share with whom. The guys I spoke with said they’ve been watching APT for over ten years. We talked about a few case examples and I realized they are stringing everything together — they would say the case is always open. I disagree with them in principle but, more importantly, I do not have any authority to make them close a case, disclose, and start new ones. I also cannot easily parse who they trust and who they fear.

Third, check out the HTCIA. The audience for my presentations at the International Conference was almost all Peace Officers, Investigators and Prosecuting Attorneys. Discussions were less theoretical and more case/fact-based than in your usual group. It’s a great place to share information on real attacks with fellow security professionals.

Titanic Secret Revealed

A descendant of a sailor on the Titanic claims to have revealed a secret about the infamous breach.

Investigators never found the truth — an error at the helm caused the initial hole, but the sinking allegedly resulted from a decision to continue sailing. The BBC News explains the rationale behind the secret:

Louise Patten’s grandfather decided not to disclose what he knew and even kept his story from an official enquiry into the sinking.

“By his code of honour, he felt it was his duty to protect his employer – White Star Line – and its employees,” Ms Patten said.

“It was made clear to him by those at the top that, if the company were found to be negligent, it would be bankrupted and every job would be lost.

“The enquiry had to be a whitewash. The only person he told the full story to was his beloved wife Sylvia, my grandmother.”

The story thus could be updated to include a self-serving and unaccountable company, and a captain who was negligent in handling the situation. I assume false pride or an incorrect belief in the ship’s ability to survive impact is what led him to steam ahead.

However, I have a hard time believing the first part of the revealed secret.

Mrs Patten said the tragedy had occurred during a period when shipping communications were in transition from sail to steam.

Two different systems were in operation at the time, Rudder Orders (used for steam ships) and Tiller Orders (used for sailing ships).

Crucially, Mrs Patten said, the two steering systems were the complete opposite of one another, so a command to turn ‘hard a-starboard’ meant turn the wheel right under one system and left under the other.

She said when the helmsman, who had been trained in sail, received the direction, he turned the vessel towards the iceberg with tragic results.

I have only been a passenger on vessels of massive size, but they give feedback on the direction steered; a hard turn especially would have made the ship shift. The error could have been corrected unless the iceberg was so close that it was too late anyway. In other words, if the helmsman could not detect the mistake and turn the other way in time (and not because of a design failure), then contact with the iceberg was likely no matter what he did. Steering the wrong direction thus may have increased the damage before they corrected course, but it was not entirely at fault.
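The confusion Patten describes can be sketched as a simple mapping problem: the same spoken order resolves to opposite wheel movements under the two conventions. This is only an illustration — the quote says the two systems were opposites, but which convention maps to which direction below is my own labeling:

```python
# Hypothetical sketch of the two steering conventions described above.
# The BBC quote only says the systems were complete opposites; the
# specific right/left assignments here are assumed for illustration.

ORDER = "hard a-starboard"

RUDDER_ORDERS = {ORDER: "wheel right"}   # steam-ship convention (assumed mapping)
TILLER_ORDERS = {ORDER: "wheel left"}    # sailing-ship convention (assumed mapping)

def interpret(order, convention):
    """Resolve a helm order under the helmsman's trained convention."""
    return convention[order]

# The same order yields opposite results depending on the helmsman's training.
print(interpret(ORDER, RUDDER_ORDERS))  # wheel right
print(interpret(ORDER, TILLER_ORDERS))  # wheel left
```

Per Patten’s account, the helmsman trained under the sailing convention resolved the order the opposite way from what the officer intended.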

Continuing to sail after impact is a much more shocking revelation, and thus one requiring investigation.

The breach, in other words, may have been impossible to avoid even if they had steered the right direction upon first warning. The decision to disregard the breach’s effect and push the ship ahead is what led to much greater disaster.

When a system is compromised it usually comes from a simple mistake: a service was left enabled, a weak cipher was used, etc. This historic event illustrates why management should not continue to use a compromised system even if they believe it to be “unbreakable”. It also illustrates how some managers may view accountability for customer security.
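The simple mistakes named above are the kind of thing a basic configuration audit catches. A minimal sketch, assuming a hypothetical config format (the baseline lists and field names here are my own, for illustration only):

```python
# Minimal, hypothetical sketch of auditing a host for the simple mistakes
# named above: an unneeded service left enabled, or a weak cipher allowed.

WEAK_CIPHERS = {"DES", "RC4", "EXPORT"}   # illustrative list of weak ciphers
ALLOWED_SERVICES = {"ssh", "https"}       # assumed approved baseline

def audit(config):
    """Return a list of findings for a simple host config dict."""
    findings = []
    for svc in config.get("enabled_services", []):
        if svc not in ALLOWED_SERVICES:
            findings.append(f"service left enabled: {svc}")
    for cipher in config.get("ciphers", []):
        if cipher in WEAK_CIPHERS:
            findings.append(f"weak cipher in use: {cipher}")
    return findings

print(audit({"enabled_services": ["ssh", "telnet"],
             "ciphers": ["AES256", "RC4"]}))
# -> ['service left enabled: telnet', 'weak cipher in use: RC4']
```

The point is less the code than the habit: the checks are trivial, which is exactly why skipping them is how compromises start.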

ATM Russian Roulette

Triton has issued a security warning for their ATMs based on an IOActive security assessment, which was presented at Black Hat earlier this year. It has seven “bullet” points that are very basic, illustrating the simplicity of managing risk versus the danger of ignoring it.

They also said this about the assessment:

Security is among Triton’s utmost concerns; strengthening our ATMs’ defenses is an on-going effort. The opportunity is welcomed to highlight the success of Triton’s continuing efforts to protect ATMs from emerging threats. Triton is hopeful that Mr. Jack’s work serves as a reminder to customers to be vigilant about installing software updates immediately upon release.

SOC1 (Service Organization Control 1) and SSAE 16 / SAS70

SAS 70 is over 18 years old and has begun to show its age. It was born before SOX or HIPAA existed, although not before COBIT. Two years ago the AICPA started looking at replacing SAS 70.

The result is SSAE 16, which must be used for any service auditor’s report covering a period ending on or after June 15, 2011. Reports issued under the SSAE 16 requirements get the title Service Organization Control 1 (SOC 1).

You now need a SOC to have SOX.

SOC1 differs from SAS70 in the following four ways:

  • Focus: It is meant to be used only when a service organization affects the internal control over financial reporting (“ICFR”) of service users (e.g. tenants)
  • Risk basis: A service organization’s management will have to explain how all aspects of their services and control objectives are reasonable given the risk. They need to identify risks and related control objectives in their description and explain how controls are deployed to mitigate the identified risks.
  • Period: The system description must cover the entire period of testing for operational effectiveness, rather than just the close of the period of operational effectiveness
  • Assertion: The report is based on an attestation standard rather than an audit standard. A service organization’s management will provide a detailed assertion for the auditors. This documented assertion is included with the SOC 1 report.

SOC1 is just the start. SOC2 comes next. Like a SAS 70, it intends to meet the needs of customers with regard to governance over service organizations. Unlike a SAS 70, it is meant to address operations and risk outside the internal financial controls. Service providers whose concerns extend beyond financial reporting, in other words, should use a SOC2 instead of a SOC1. SOC 3 is a lighter version — it lacks the detailed test results of a SOC 2 — meant for a general audience.

SOC2 is based on the Trust Services Criteria (previously known as the SysTrust and WebTrust criteria). It will give guidance with a SAS70-like report and criteria/objectives that controls should meet when they are put in place. It is meant to cover the risk categories of Security, Availability, Processing Integrity, Confidentiality and Privacy.