ID breach risk debated

ID Analytics has released a report that suggests ID theft from credit cards needs to be re-evaluated in terms of actual risk to consumers. Reuters picked up the report in today’s news and suggests it shows “where thieves access social security numbers and other sensitive information on consumers they have deliberately targeted — only about 1 in 1,000 victims had their identities stolen.” Reuters goes on to say:

After six months of study, comparing compromised information against credit applications, ID Analytics said it discovered something counterintuitive: The smaller the breach, the greater the likelihood the information was subsequently used by fraudsters to hijack the identity of victims.

“If you’re in a breach of 100, 200 or 250 names, there’s a pretty high probability that your identity is going to be used,” said Mike Cook, ID Analytics’ co-founder.

“The reason for that is if you look at how long it takes a fraudster to use an identity, they can roughly use 100 to 250 in a year. But as the size of the breach grows, it drops off pretty drastically.”

I do not think that is counter-intuitive at all. Small breaches are likely to be easily explained as directed attacks, as opposed to the more complicated investigation behind a story that a box of tapes has been misplaced. The Ford Motor Credit breach is an example of a massive breach that was both highly directed and that can continue giving consumers grief for many years after their data was stolen. So it is plausible that every “loss” will in fact be discovered, after the fact, to be a successful attack or “breach”.
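Cook’s arithmetic can be sketched in a few lines. This is my own toy model of his quote, not ID Analytics’ methodology: assume a fraud ring can work through roughly 100–250 stolen identities per year, so the chance that any one record in a breach gets used falls off sharply with breach size.

```python
def misuse_probability(breach_size: int, capacity_per_year: int = 250) -> float:
    """Chance a given record is used, assuming the ring spends its whole
    yearly capacity (Cook's 100-250 identities) against this one breach."""
    return min(1.0, capacity_per_year / breach_size)

for size in (100, 250, 10_000, 10_000_000):
    print(f"{size:>10,} records -> {misuse_probability(size):.4%} per record")
```

For a 250-name breach every record is at risk; for a 10-million-record breach the per-record probability drops to 0.0025% under these assumptions, which is the “drops off pretty drastically” in the quote.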

That being said, it is hard not to notice that Reuters first claims that “ID Analytics said it discovered that identity thieves have a hard time using stolen credit cards to hijack the identity of cardholders because the cards are usually quickly canceled” and then concludes with a rather contradictory statement from the ID Analytics co-founder:

“As far as notifications, we think there are certain instances where businesses might want to notify consumers and certain instances where they might not to inform them,” said Cook. “For instance, if they lose data, and they don’t know where it is, we think too many notices may not be a good thing. They should probably monitor that and spend dollars on consumers who are actually harmed, rather than spending dollars on 10 million consumers” most of whom won’t be affected.

Where does the certainty come from? If you have lost the data, then presumably you have lost control of its future use. Who should be allowed to decide when it is safe to give up on an investigation and declare a “loss” as opposed to a “breach”? In addition, if notification and cancelling card numbers have made ID theft less easy, then why should you not notify all consumers when you have lost their data, to ensure the maximum reduction of risk? Monitoring to detect and catch a criminal I can understand, but I don’t follow the logic of “notification reduces risk when IDs are stolen, but you might not need to notify”. This reminds me of the theory that if trace amounts of a substance kill fewer than one in 1,000 customers, then large companies might find it more profitable to just pay off the families who suffer rather than prevent the harm.

Perhaps the lack of clarity is because Reuters did not mention the “significant finding”, which can be found at the start of the report from ID Analytics:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen, and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit. […] Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling,” to avoid detection and to prolong the scam.

Precisely. That makes perfect sense to me, as everyone wants a spectrum of risk to review, not a black-or-white approach. And yet we should not forget that the vast majority of companies that house ID information still look at breaches from a “cup is half full” perspective and might not be in a suitable (expert) position to make intelligent decisions about the risks. Look at CardSystems, for example. The question is not whether people are trying to classify more levels of risk, but what the quality of the data and their analysis is — how qualified are today’s executives to make information security risk decisions on behalf of hundreds of thousands of consumers (assuming larger breaches will now automatically be determined to be lower risk)? Moreover, if you publicize a report that says huge breaches are lower risk and therefore exempt from breach disclosure, it seems obvious that clever criminals will simply shift to exploiting huge breaches, no? That makes economic sense as well, since a huge breach can be distributed among many criminals who will be able to commit ID theft more efficiently. They do not call it a black market for nothing, eh?
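The “tumbling” the report describes — mutating a stolen identity a digit at a time to dodge detection — also suggests how it gets caught. Here is a minimal sketch of my own (invented names and thresholds, not ID Analytics’ actual detection method): flag applications whose SSNs are suspiciously close to each other.

```python
def digit_distance(ssn_a: str, ssn_b: str) -> int:
    """Number of digit positions where two 9-digit SSN strings differ."""
    return sum(a != b for a, b in zip(ssn_a, ssn_b))

def flag_tumbling(applications, max_distance=2):
    """Return pairs of applications whose SSNs differ in only a few digits --
    a possible sign of one identity being 'tumbled' across applications."""
    flagged = []
    for i in range(len(applications)):
        for j in range(i + 1, len(applications)):
            if 0 < digit_distance(applications[i], applications[j]) <= max_distance:
                flagged.append((applications[i], applications[j]))
    return flagged

apps = ["123456789", "123456780", "987654321"]
print(flag_tumbling(apps))  # [('123456789', '123456780')]
```

The real value of a cross-industry network like ID Analytics’ is that these near-duplicates only become visible when applications from many issuers are pooled.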

A San Diego newspaper article from 2003 mentions that ID Analytics is a startup with “Citibank, Dell Financial Services, Sprint, Diners Club, Discover Financial Services and First North American National Bank as clients”. The company definitely seems to be a reputable source on the subject (“30 employees, including seven Ph.D.s”) with a timely business strategy that, according to the newspaper, almost could be an extension of the Payment Card Industry itself:

“I’m convinced they have a good product,” said Beth Givens of the Privacy Rights Clearinghouse in San Diego. “It’s just a shame that the financial industry didn’t take steps to fix these problems themselves.”

Givens contends that “credit issuers should have been spending more time on each credit application all along. And by more time, I mean, by two minutes.”

Hansen grimaced a bit when the gist of Givens’ criticism was recited.

He acknowledged that a financial institution might process as many as 10,000 credit applications an hour. But he says the industry also contends with a variety of challenges, including regulatory requirements to process each credit application within 30 days.

“What we’re trying to do is bring a technology solution to bear within the context of an automatic processing environment,” Hansen said.

In other news, Visa is asking assessors to re-certify, due to recent changes to the PCI data security standards, to the tune of tens of thousands of dollars in training fees. That is a hefty chunk of money, even for veterans of PCI security, and, believe it or not, the standards are expected to change again in January 2006. Contrast this money-maker with the fact that Visa is giving away compliance scans for free outside the US and that the amount of credit card fraud has dropped dramatically over the past ten years, due mainly to additional card verification measures (from billions of dollars to the low hundreds of millions). Fascinating stuff and a very exciting time to be in information security.

Dec 12, 2005 Update: Written Testimony from ID Analytics that was submitted on Nov 9th, 2005 to the Subcommittee on Financial Institutions and Consumer Credit (Hearing on H.R. 3997, the “Financial Data Protection Act of 2005”) can be found here:

However, misuse rates could continue to increase drastically over time if the vibrant black market for “identities” remains unimpeded. … By selling any amount of the remaining identities (those not able to be used because of the ‘feasible limit’), fraud rings could maximize the proceeds from their efforts and exact a far greater degree of harm to consumers, industry and government over time.

Yet another DRM malware alert

If this keeps up I may need a dedicated DRM category to track the flow of malware released under the guise of protecting big-media profits at the expense of consumer rights. The Register reported today:

According to the EFF, the vulnerability centres on a file folder installed by the MediaMax software shipped on some Sony CDs, “that could allow malicious third parties who have localized, lower-privilege access to gain control over a consumer’s computer running the Windows operating system.” …other severe problems with MediaMax discs, including: undisclosed communications with servers Sony controls… undisclosed installation of over 18 MB of software regardless of whether the user agrees to the End User License Agreement; and failure to include an uninstaller with the CD.

There is definitely a balance and a right way to do things in “digital copyright management” (DCM), which is what DRM should be called, but the fact that the EFF claims 30 other labels use this software means the big labels either do not realize the harm they are causing by distributing malware or they do not believe they are liable. A healthy market would find neither acceptable.

Diebold insider issues warnings

RawStory posted an “exclusive interview” yesterday. There are some harsh allegations that altogether appear to be a stern warning to stay away from Diebold systems until an independent and open validation is available:

Previous revelations from the whistleblower have included evidence that Diebold’s upper management and top government officials knew of backdoor software in Diebold’s central tabulator before the 2004 election, but ignored urgent warnings—such as a Homeland Security alert posted on the Internet.

[…]

The 2002 gubernatorial election in Georgia raised serious red flags, the source said.

“Shortly before the election, ten days to two weeks, we were told that the date in the machine was malfunctioning,” the source recalled. “So we were told ‘Apply this patch in a big rush.’” Later, the Diebold insider learned that the patches were never certified by the state of Georgia, as required by law.

[…]

Responding to public demand for paper trails, Diebold has devised a means of retrofitting its paperless TSX system with printers and paper rolls. But in Ohio’s November 2005 election, some machines produced blank paper.

The whistleblower is not surprised. “The software is again the culprit here. It’s not completely developed. I saw the exact same thing in Chicago during a demonstration held in Cook County for a committee of people who were looking at various election machines… They rejected it for other reasons.”

Asked if Ohio officials were made aware of that failure prior to the recent election, the source said, “No way. Anything goes wrong inside Diebold, it’s hush-hush.”

Most officials are not notified of failed demonstrations like the one in Cook County, the insider said, adding that most system tests, particularly those exhibited for sale, are not conducted with a typical model.

California recently conducted a test of the system, without public scrutiny, that found only a three percent failure rate—far lower than earlier tests, which found a 30 percent combined failure rate due to software crashes and printer jams.

Asked if the outcomes of the newest test should be trusted, the whistleblower, who does not know the protocols used in the California test, warned, “There’s a practice in testing where you get a pumped-up machine and pumped-up servers, and that’s what you allow them to test. Diebold does it and so do other manufacturers. It’s extremely common.”

[…]

The Diebold insider noted that the initial GEMS system used to tabulate votes for the Diebold Opti-scan systems was designed by Jeffrey Dean, who was convicted in the early 1990s of computer-aided embezzlement. Dean was hired by Global Election Systems, which Diebold acquired in 2000. Global also had John Elder, a convicted cocaine trafficker, on its payroll.

Someone convicted of computer-aided embezzlement designed the system? Security clearance is mandatory for many government jobs related to handling sensitive information; one would think that election systems should be treated in a similar fashion. At this point, Diebold should be held to a strict burden of proof that their systems are safe, and not allowed to release any product for public consumption until all uncertainty has been thoroughly clarified.

Alternatively, perhaps Diebold management should ask their staff to use their own systems to vote on future direction for the company and swear that they will abide by the outcome. Live by the sword…

Gates wrong about spam

Apparently as many as 80% of people surveyed did not trust Gates in 2004 when he announced that spam would be gone by 2006. An article in today’s ZDNet suggests that within 30 days that number might jump to as high as 100%:

Bill Gates’ prediction of January 2004 that spam would be “a thing of the past” within two years has virtually no chance of coming true, according to security company Sophos this week.

Beware those who say “security will happen by x date”. True security is far more complex and subject to uncertainties than a short-term objective such as a functionality enhancement. Moreover, there are usually so many influential factors that it is better to say “security will have x control in place by y date” and predict a resultant soft “decline” rather than any “absolute” or “total” eradication.

ZDNet put it slightly differently when they covered the original announcement.

John Cheney, chief executive of email security firm BlackSpider Technologies, which conducted the survey, said the results show that the industry doesn’t perceive Microsoft as a security authority, despite its chairman’s enthusiasm for the task.

To his credit, at least Gates did not land on the roof of Symantec in 2004 for a photo-op in front of a “Mission Accomplished” banner.

The Key to Recovery

Quantum thinks 2006 will finally be a good year to market security for tape backups. They just announced that they will be ready in the first quarter of 2006 to provide an authentication (locking) mechanism for tapes:

Quantum’s DLTSage Tape Security is a firmware feature designed into its newest DLT tape drives that uses an electronic key to prevent or allow reading and writing of data on to a tape cartridge.

Sounds interesting. The two big hurdles to encryption on tape have been how to handle key management and the performance hit. With key management integrated first, Quantum still has to generate some buzz about performance. They mentioned it briefly in their DLTSage announcement, but it sounds like they are still working on what to do with the technology in an appliance they acquired:

The DataFort appliance provides wire-speed, transparent encryption and access controls for disk and tape storage systems, delivering best-in-class security, performance and key management for heterogeneous storage environments. In addition to the joint sales and marketing efforts with Decru, Quantum also plans to offer tightly integrated encryption and security management capabilities within its product line.

Quantum could be hinting that their encryption appliances will give way to a more integrated solution, which sounds like a reasonable and well-worn approach to enhancing big company legacy products with innovation via acquisitions. If the integration is successful I expect we will find ourselves without any good reason not to encrypt at the block-level, especially on recovery systems. Until then, it seems we must continue file-level encryption prior to backup.

So, is a lock on a tape worth the hassle? It does not exempt you from breach-notification laws, and it introduces the risk of lost keys, so there is no real ROI there, but it does pre-stage the backup processes with tighter authentication. And that may be worthwhile if you can ensure that time spent on key management now will help reduce the cost of encryption down the road (when performance is a truly dead issue).
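To make the distinction concrete: an electronic key that merely locks a tape can be as simple as a keyed digest check before the drive permits reads and writes. Quantum has not published how DLTSage works, so this is an invented sketch of the general idea, not their implementation:

```python
# Hypothetical cartridge-lock sketch: the cartridge stores only a salted
# keyed digest, never the key itself, and the drive verifies the key at
# mount time. Data on the tape remains unencrypted -- this is
# authentication, not confidentiality.
import hashlib
import hmac
import os

def lock_cartridge(key: bytes) -> bytes:
    """Produce the token written to the cartridge when locking it."""
    salt = os.urandom(16)
    return salt + hmac.new(salt, key, hashlib.sha256).digest()

def unlock_cartridge(token: bytes, key: bytes) -> bool:
    """Drive-side check: does the presented key match the stored token?"""
    salt, digest = token[:16], token[16:]
    candidate = hmac.new(salt, key, hashlib.sha256).digest()
    return hmac.compare_digest(candidate, digest)

token = lock_cartridge(b"site-backup-key")
print(unlock_cartridge(token, b"site-backup-key"))  # True
print(unlock_cartridge(token, b"wrong-key"))        # False
```

Note what the sketch makes obvious: anyone who bypasses the drive firmware can still read the raw tape, which is exactly why a lock alone does not buy you the protection encryption would.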

Computer controls and conclusions

Donohue and Levitt are somewhat famous for their bold claim, published in the May 2001 edition of the Quarterly Journal of Economics, that legalized abortion has reduced crime.

The Economist just put forward an amusing update that discusses a Federal Reserve Bank of Boston working paper and counter-claim that is based on a re-test of the data and analysis of the computer code used by Donohue and Levitt:

Messrs Foote and Goetz have inspected the authors’ computer code and found the controls missing. In other words, Messrs Donohue and Levitt did not run the test they thought they had—an “inadvertent but serious computer programming error”, according to Messrs Foote and Goetz.

Fixing that error reduces the effect of abortion on arrests by about half, using the original data, and two-thirds using updated numbers. But there is more. In their flawed test, Messrs Donohue and Levitt seek to explain arrest totals (eg, the 465 Alabamans of 18 years of age arrested for violent crime in 1989), not arrest rates per head (ie, 6.6 arrests per 100,000). This is unsatisfactory, because a smaller cohort will obviously commit fewer crimes in total. Messrs Foote and Goetz, by contrast, look at arrest rates, using passable population estimates based on data from the Census Bureau, and discover that the impact of abortion on arrest rates disappears entirely.

I look forward to the question of this programming “error” being addressed by Donohue and Levitt. It does not seem to refute the premise of their conclusion outright as much as question the methodology and provide an opportunity to fix a control and re-run the tests themselves.
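The normalization point Foote and Goetz make is simple arithmetic, and worth spelling out. A toy illustration with invented cohort numbers (not the paper’s data): a smaller cohort can post fewer total arrests while having a higher underlying rate.

```python
def arrest_rate_per_100k(arrests: int, cohort_population: int) -> float:
    """Normalize an arrest total by cohort size, per 100,000 people."""
    return arrests / cohort_population * 100_000

# Hypothetical cohorts: the bigger one has more arrests in total,
# but the smaller one has the higher rate per head.
big_cohort   = arrest_rate_per_100k(465, 70_000)   # ~664 per 100k
small_cohort = arrest_rate_per_100k(300, 30_000)   # 1000 per 100k
print(big_cohort < small_cohort)  # True
```

Comparing raw totals across cohorts of different sizes conflates the effect being measured (propensity to offend) with the size of the cohort itself, which is the confound Foote and Goetz say drove the original result.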

The big question, of course, is still whether there are controls that have a direct relationship to reducing crime and at what cost.

Top 10 Data Disasters

Ontrack has released their annual report on the top ten data disasters. It is a serious business, and Ontrack has built quite a reputation for saving the day(ta):

10. PhD Almost an F – A PhD candidate lost his entire dissertation when a bad power supply suddenly zapped his computer and damaged the USB Flash drive that stored the document. Had the data not been recovered, the student would not have graduated.

He must have been in a state of shock — “Teacher, the electricity ate my homework”.

4. Drilling for Data – During a multi-drive RAID recovery, engineers discovered one drive belonging in the set was missing. The customer found the missing drive in a dumpster, but in compliance with company policy for disposing of old drives, it had a hole drilled through it.

Can we please see the top ten without the remarkable recoveries included (just the failures)? That would be more interesting, I think. Or, as the famous WWII story goes, if you are going to better protect your pilots you should review the planes that were shot down rather than just the ones that returned.

Apple Turn-Offs

Don Norman, former VP of Apple’s Advanced Technology Group, posted a comment on TedBlog about a common failure of Apple designs:

But now let me tell you my pet peeve: the on-off switch of both the regular iPods and the Shuffle. Historically, one thing Apple has always gotten wrong – on all products, big and small — is the power switch (I even wrote a book chapter about this once). The iPod on-off is a mystery to behold, a mystery to explain to others. The Shuffle is even worse. You have to slide a very-difficult-to-slide slider down some unknown amount. It has two settings, but no marking to let you know where you are. Actually, it has markings but they have zero correspondence to the switch setting. You know, this is NOT a tradeoff. Having a little mark on the sliding part and corresponding labeled terms on the fixed case would be trivial to do. Make usage smoother and easier. Cost no money. Bah.

Why is the slider so hard to slide? Their Industrial Designers seem not to have heard of friction — the fingers slip over the nice smooth surface, while the switch remains stationary. Finally when I finally squeeze really hard, the slider does move, but too far, to the wrong position. And those blinking lights. Secret codes that mean who-knows-what. It sometimes takes me 5 minutes to get my Shuffle to start playing, me continually sliding the switch up and down, pushing various buttons, watching lights go on, blink on, flash, turn various colors. All meaningless.

Just the other day I was reviewing racks of servers with bright warning lights. “What does that indicate?” I asked the admin responsible to see if they could decipher the code. Unfortunately, I was told something similar to what Don might have guessed, “no idea, but they seem to come and go.”

All the way from the personal mp3 player to the datacenter, the sole LED has become a cornerstone of messaging and yet no one seems to be very worried about learning how to interpret its meaning. The old-school hex number codes were one thing, but it seems like an amber or green light blinking erratically is almost guaranteed to be ignored.

To be fair, Don could have mentioned that Apple does provide an iPod shuffle reference card to break the codes.

I like the Check battery code: if you do not see a light, there is no charge. Ah, yes, and if your shuffle is wet, it must be raining.

PodCast Hijacking

Corante has an interesting warning about podcasting security. It seems that if you’re not careful, someone else might register your podcast for you and (as a man-in-the-middle) wait for an opportune moment to turn off their link and blackmail you.

Ease of adoption strikes again. Authentication of an RSS feed might be a good idea, even if it adds a moderate amount of friction. Podcast certificates, anyone?
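One hedged sketch of what “feed authentication” could mean in practice (an invented scheme for illustration, not an existing podcasting standard): the publisher signs the feed content with a key only they hold, and a directory verifies the signature before accepting a registration or an update, which would defeat the man-in-the-middle registration described above.

```python
# Toy feed-signing scheme using a shared secret between publisher and
# directory. A real design would use public-key signatures (certificates)
# so the directory never holds the publisher's private key.
import hashlib
import hmac

def sign_feed(feed_xml: bytes, publisher_key: bytes) -> str:
    """Publisher side: sign the feed bytes."""
    return hmac.new(publisher_key, feed_xml, hashlib.sha256).hexdigest()

def verify_feed(feed_xml: bytes, signature: str, publisher_key: bytes) -> bool:
    """Directory side: reject feeds whose signature does not check out."""
    return hmac.compare_digest(sign_feed(feed_xml, publisher_key), signature)

feed = b"<rss><channel><title>My Podcast</title></channel></rss>"
sig = sign_feed(feed, b"publisher-secret")
print(verify_feed(feed, sig, b"publisher-secret"))               # True
print(verify_feed(feed + b"<tampered/>", sig, b"publisher-secret"))  # False
```

The design choice that matters is binding the feed to a key the hijacker cannot obtain; whether that key lives in an HMAC secret or a certificate is secondary.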

Can you survive without a hard drive?

NEC has announced a new laptop that has no hard disk drive, perhaps with the intention of preventing any loss of confidentiality if a powered-off system is lost or stolen:

Local storage resides in the computer’s RAM, which is cleared when the machine is switched off, thus removing any potential security risk from data theft but also requiring a backup before the computer is switched off. This can be done with a central server or, should a network not be available, to a USB memory device, [a spokesman for the Tokyo company] says.

It’s peace of mind for many, I’m sure, but most attacks happen while the computer is switched on and connected to a network. Just a few more thoughts:

1) This could be a glimpse of the future when online security becomes so strong that remote attacks become truly remote, meaning the physical security of traditional PCs with massive local storage (80GB and more) may be the weak link of tomorrow.

2) Saving files to USB doesn’t seem like it provides any real consolation unless the USB device is encrypted or has some other controls (pill-format that can be easily swallowed?) to prevent loss. Not to mention USB fobs tend to be unreliable and have the annoying habit of wiping themselves without warning, so I wouldn’t exactly rely on them without some kind of extra assurance.

3) This is likely to be transformed into something a little more practical such as an Internet cafe system, or public kiosk. Restart the system and you know it is clean. That type of environment would easily justify the extra expense. I don’t see the cost being justified in a personal laptop sense (yet) for the prior two reasons.

4) Personally, I would love to have an instant-on thin client interface at home, which would rely on a centralized redundant array of inexpensive disks. Nothing in the market is really there yet for the home user. Yet, the NEC system suggests we could be nearing an age when a true thin-client and server-like solution could be in every home (“honey, I think we need to upgrade the datacenter”). And then we could talk about home security in a similar manner to large corporations (layers and defense-in-depth) instead of a random smattering of desktops littered around a household trying in vain to share files and migrate profiles without excessive self-exposure.

Have to give NEC some credit for pushing the envelope on security. The last thing I saw from them was a massively-redundant 4U server that promised better than five nines (less than 5 minutes of down-time per year). See? You put that thing in your basement with HVAC conditioning and a few of these laptops around the house…as soon as the price comes down to earth I’m on it.

Cool company.

the poetry of information security