Category Archives: Security

Steam Car for Sale

Up for auction tomorrow is a four-seater steam “quadricycle” with a range of 20 miles on 40 gallons of water: the 1884 De Dion Bouton Et Trepardoux Dos-A-Dos Steam Runabout.

De Dion’s little quadricycle can claim to be the first family car, despite its arcane power source. What makes it different from road-going locomotives dating back to Cugnot’s 1770 tractor is its sophisticated boiler, which can be steamed in 45 minutes. It is also compact at only nine feet long and relatively light at 2,100 pounds. But, it has four wheels, seats four, and can be driven by one person — like a modern car.

Steam Car

It is one of the oldest still-functioning vehicles and a promising early design, but it is said to have been expensive even back in 1884.

By 1889 you could buy a tricycle for 2,800 francs ($540) and a quadricycle for 4,400 francs ($850).

Those prices were certainly out of reach for the average enthusiast, when a French laborer might make five francs a day, and sales were confined to the very rich.

Hmmm, 5 francs a day x 365 days = 1,825 francs. So a tricycle would cost about a year and a half of wages, and a quadricycle nearly two and a half. An American laborer might make $120 a day x 365 days = $43,800. So a $60,000 car today, at roughly 1.4 years of wages, is in the same range as a tricycle “confined to the very rich” in the 1890s? That’s like saying a $60,000 car today is confined to the very rich. Am I missing something?
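The back-of-the-envelope comparison works out as follows (all figures come from the paragraph above, including its simplifying assumption of 365 paid days a year):

```python
# Figures from the post; 365 paid days a year is the post's own simplification.
FRANC_DAILY_WAGE = 5
USD_DAILY_WAGE = 120
DAYS = 365

franc_annual = FRANC_DAILY_WAGE * DAYS   # 1,825 francs per year
usd_annual = USD_DAILY_WAGE * DAYS       # $43,800 per year

tricycle_ratio = 2800 / franc_annual     # ~1.53 years of wages
quadricycle_ratio = 4400 / franc_annual  # ~2.41 years of wages
modern_car_ratio = 60000 / usd_annual    # ~1.37 years of wages
```

By this crude measure a $60,000 car today sits between the tricycle and the quadricycle in years-of-wages terms, which is the point of the comparison.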

Price was surely a factor, but it seems the real reason for its demise was the allure of gasoline.

By 1893 gasoline was the up-and-coming power source, and steam devotee Trepardoux left the firm and presumably went back to toys. A celebrated duelist and ladies’ man, De Dion was keen on animal welfare and made a few large steam trucks in an effort to free horses from hauling heavy carts, and then he and Bouton focused on gasoline automobiles. They patented their transmission in 1895 and dominated the early years of the 20th century, with De Dion engines powering some of the first great marques, like Renault, Pierce-Arrow and Delage.

Betfair’s Gamble on Disclosure

Nearly four million records, apparently including encryption keys, were stolen last year from the fast-growing gambling company. The Telegraph reports Betfair was forced to report the theft to law enforcement, partners and regulators:

The theft was so serious that Betfair was forced to inform the UK’s Serious Organised Crime Agency (SOCA), the Australian Federal Police and German law enforcement officials. It also notified the UK Gambling Commission and the Maltese Lotteries and Gaming Authority, as well as Royal Bank of Scotland, its “acquiring bank” – the lender responsible for accepting credit and debit card payments made via Betfair.

They did not, however, report it to the owners of the records who would be impacted:

Its July report to regulators states it had decided there was no reason to inform its customers, after taking advice from SOCA that “public disclosure would be detrimental to any intelligence operation or investigation”.

The argument for not disclosing the breach to customers supposedly hinged on a little detail about whether sensitive track data was exposed.

“We have taken the prudent view that the criminal has the expertise to decrypt the payment card details,” Betfair admitted, though stressed that the “CVV2/CVC security numbers” were not stolen.

It said advice from RBS was that “this very significantly limits the ability of the cards to be used fraudulently”.

That’s nonsense, of course. If the data were so hard to use fraudulently, why was it encrypted in the first place? The PCI DSS wouldn’t be so strict about encryption and secure deletion if the RBS argument about “significantly limits” were true. And we are talking about RBS, I have to point out, another company infamous for weak security.

CVV2/CVC was not present because storing it is strictly prohibited, but it’s not as if the card brands say go ahead and let the rest of the data float around. More to the point, criminals make fraudulent use of cards all the time without the CVV2/CVC.

It’s a story to make many people upset, surely, but here’s a humorous twist in the details: the breach was only discovered when a server that should have been monitoring for breaches crashed, two months after the breach started.

The first Betfair knew of the theft was when a “production log server” crashed in its Malta data centre on May 20 – more than two months after the initial breach. That led to the discovery that “at least nine servers [had] been compromised in the UK and two in Malta”.

Hey, someone check the log server. It stopped responding. Oh, well look at that, the logs say we have been breached for a while.

That might scare some executives into proceeding with caution, but Betfair not only took a gamble by not disclosing the breach to customers, they then took an even bigger gamble — going public while faced with serious operational deficiencies.

Just a month before the decision to press ahead with the float, Betfair had received a “Forensic Investigation Report” on the cyber theft from security consultancy Information Risk Management (IRM).

Its first conclusion was that: “Appropriate information security governance is not in place within Betfair and as a consequence the business has been exposed to significant risks.”

Another one? That “appropriate technical controls relating to such elements as network segregation and file integrity monitoring that would provide Betfair the ability to deter, prevent and detect such an incident are not in place”.
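File integrity monitoring, one of the controls IRM found missing, is conceptually simple: record known-good hashes of critical files, then alert when they drift or disappear. A minimal sketch (the function names and monitored paths are illustrative, not anything from Betfair's environment):

```python
import hashlib
from pathlib import Path


def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()


def build_baseline(paths):
    """Record known-good hashes for a set of monitored files."""
    return {str(p): sha256_of(p) for p in paths}


def check_integrity(baseline):
    """Compare current hashes against the baseline; return a list of
    (path, reason) tuples for files that changed or went missing."""
    findings = []
    for path, known_hash in baseline.items():
        try:
            current = sha256_of(path)
        except FileNotFoundError:
            findings.append((path, "missing"))
            continue
        if current != known_hash:
            findings.append((path, "modified"))
    return findings
```

Commercial FIM tools add scheduling, tamper-resistant baseline storage, and alerting, but the detection logic comes down to this comparison; the point is that someone has to actually run it and watch the results.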

Now we can watch and see how the gambles work out for them.

How (not) to Fail an Audit

As I have written here several times before, like in a post on accepting mistakes to reduce their frequency, I am a big fan of the phrase “fail faster”.

Many years ago when I was Director of IT and Security at a very large enterprise I was fond of saying “fail faster” to my staff. I wanted them to feel comfortable with the idea that they should focus on always improving. The CIO was not fond of this and constantly asked me why a Director of Security, of all people, would encourage failure.

I could give a hundred examples (sports, martial arts, arts, etc.) where a perfect score is not only unlikely but self-defeating. This was familiar to some, but others still tried to prove to me that “only first place matters” and that failures should always be downplayed or obscured. My fear was that their behavior was a slippery slope to fraud. Their concern was that my behavior was demotivating.

Today a colleague read a post called “Fail a Security Audit Already — it’s Good for You” and asked me if this means QSAs are too soft on their clients. The author gives this analysis:

If the audit is a stress-test of your environment that helps you find the weaknesses before a real attack, you should be failing audit every now and then. After all, if you’re not failing any audits there are two possible explanations:

1) You have perfect security.

2) You’re not trying hard enough.

I disagree and will try to explain why it’s different in this case. The author is clearly not speaking from the auditor’s perspective. You don’t want to tell companies to fail a PCI DSS audit. It’s a subtractive system: a company does not get a pass until it has removed all areas of remediation or compensation and can prove that things are running smoothly on an ongoing basis. The following paragraph gives a strange depiction of the audit process:

Companies should be failing audits, whether internal or external, far more often than they suffer breaches. The fact that few companies are failing any audits should be cause for concern, not celebration.

How exactly has the author concluded the “fact” that few companies are failing audits? As a long-time auditor I see companies trying to pass audits far more often than I see them being breached; I would call this reviewing test results and remediating in order to pass an audit.

And what celebration is the author talking about? When an auditor leaves a passing score there typically is a sigh of relief, not celebration. I am tempted to suggest this to a restaurant. Next time the health inspector gives them a passing score I will ask them to serve free cake and champagne. Probably won’t fly. I suspect there is no evidence of celebration.

The motto of fail faster works for rapid development and improvement, but “trying” to fail an audit or an exam is bad advice to give a company. It’s like saying your tachometer isn’t trying hard enough if it doesn’t fail every once in a while to tell you the correct RPM. Or that you aren’t a good driver if you aren’t trying to fail your license test. Imagine if auditors tried to fail their certification test to prove that they were really trying hard to understand the regulations.

The decision of when to try and fail is nuanced. It can be confusing, which goes back to why the CIO cautioned me about motivation and interpretation. There are some things you want to fail and measure frequently (e.g. practice runs, tests) and things you don’t want to fail (e.g. final exams). The CSO article does not make this important distinction, and when it tells you to fail it does not mention that you should weigh the consequences of failure. If we limit our definition of an audit to a formal one (the final Report on Compliance to the Payment Card Industry Security Standards Council), then it is not good advice to try and fail. You should try to pass, by failing faster.

Ex-Vormetric Execs Start High Cloud

Bill Hackenberger (VP of Engineering at Vormetric) and Steve Pate (CTO at Vormetric) quit the company in 2009 and have now started…an encryption company. Steve Pate also claims to have been a founder of HyTrust, which could explain why they have named their new company High Cloud.

They are offering “early access to a Beta version of our solution” (early Beta = Alpha?) so they are far from ready for prime-time, but they appear to be in the right mindset and offer a variation of proxy architecture, similar to HyTrust. Here is a diagram presented by the CTO in 2008 that has a dedicated/physical key management server.

They list the capabilities that auditors have been asking for from cloud providers for years. The following functionality, for example, maps to some of the older text of the PCI DSS compliance requirements:

  • Selected elements of the VM are encrypted.
  • VMs are encrypted in storage, in transit, and in backups.
  • VMs are protected in the data center, outside when run on a remote server, or in the Cloud.
  • Keys are not visible to anyone.
  • Separation of duties guarantees that no single person can cause catastrophic damage.
  • Key rotation to satisfy regulatory bodies is performed automatically without the need to shut down the VM.
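Key rotation without shutting down the VM usually relies on envelope encryption: bulk data is encrypted under a data-encrypting key (DEK), the DEK is wrapped by a key-encrypting key (KEK), and rotating the KEK only requires re-wrapping the small DEK, never re-encrypting the VM itself. A sketch using the Python `cryptography` package's Fernet recipe (an illustration of the general technique, not High Cloud's actual design, which is not public):

```python
from cryptography.fernet import Fernet

# Generate a key-encrypting key (KEK) and a data-encrypting key (DEK).
kek = Fernet.generate_key()
dek = Fernet.generate_key()

# Encrypt the bulk data with the DEK, then wrap the DEK with the KEK.
# Only the wrapped DEK is stored alongside the data.
ciphertext = Fernet(dek).encrypt(b"VM disk block")
wrapped_dek = Fernet(kek).encrypt(dek)

# Rotate the KEK: unwrap the DEK and re-wrap it under a new KEK.
# The bulk ciphertext (and the running VM) is untouched.
new_kek = Fernet.generate_key()
dek_plain = Fernet(kek).decrypt(wrapped_dek)
new_wrapped_dek = Fernet(new_kek).encrypt(dek_plain)

# Data still decrypts after rotation, via the re-wrapped DEK.
recovered = Fernet(Fernet(new_kek).decrypt(new_wrapped_dek)).decrypt(ciphertext)
```

This also shows why separating key-encrypting keys from data-encrypting keys matters for the PCI DSS testing procedures quoted below: the KEK can live in a dedicated key management server while only wrapped DEKs sit near the data.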

Although I have to say, the line “keys are not visible to anyone” is poorly written and suggests vaporware. I would have expected better given how long the founders have been in the industry and the text provided by regulatory bodies. Here are the PCI DSS Requirement 3.5 testing procedures, for reference.

  • 3.5.1 Examine user access lists to verify that access to keys is restricted to the fewest number of custodians necessary.
  • 3.5.2.a Examine system configuration files to verify that keys are stored in encrypted format and that key-encrypting keys are stored separately from data-encrypting keys.
  • 3.5.2.b Identify key storage locations to verify that keys are stored in the fewest possible locations and forms.

The regulations will specify need-to-know, not invisible to anyone. I also noted a mistake in their reference to the ISO requirements. It’s still early, so maybe these issues will be worked out by the time they have a non-early Beta available.