Sober Day, 2006

F-Secure has an excellent write-up on their blog detailing an impending Sober attack, scheduled for January 6, 2006:

Sober.Y was the biggest email outbreak of the year. It still is responsible for around 40% of all the infections we see. This variant is programmed to activate on January 6th, 2006. After this date all the infected machines will regularly try to download and run a file from a website, forever. The virus even synchronizes the machines via atom clocks so the activation will not happen before January 6th, even if the clock of the computer is incorrect.

Scan early, scan often. But the more interesting part of their log entry is this:

The Sober virus author can precalculate the URLs. We wanted to be able to do the same thing. So we cracked the algorithm. This enabled us to calculate the download URLs for any future date. In fact, we did this already in May 2005, and we informed the local police in Germany as well as the affected ISPs. But we didn’t want to talk about it publicly then – we didn’t want to fill in the virus writer on this. But he must know this by now.

And then they give examples of the URLs. Nice work.
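F-Secure did not publish the algorithm (for good reason), but the general technique, a date-seeded URL generator, is easy to sketch. The snippet below is a hypothetical illustration of that idea only, with placeholder host names and my own hashing choice; it is not the Sober code.

```python
# NOT the actual Sober algorithm (F-Secure did not publish it); just a
# sketch of a date-seeded URL generator, showing why anyone who knows the
# derivation can list the download URLs for any future date.
# The worm reportedly takes its date from network time servers ("atom
# clocks"), so resetting the local clock would not delay activation.
import hashlib
from datetime import date, timedelta

HOSTS = ["free-host-one.example", "free-host-two.example"]  # placeholder hosting providers

def urls_for(day, count=3):
    """Derive deterministic pseudo-random URLs for a given calendar date."""
    urls = []
    for i in range(count):
        seed = f"{day.isoformat()}:{i}".encode()
        name = hashlib.md5(seed).hexdigest()[:10]  # deterministic "random" path
        urls.append(f"http://{HOSTS[i % len(HOSTS)]}/{name}/")
    return urls

activation = date(2006, 1, 6)
for offset in range(3):
    d = activation + timedelta(days=offset)
    print(d, urls_for(d))
```

Cracking the real derivation is what let F-Secure hand the future download locations to the police and the affected ISPs months before the activation date.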

Roaring Forties Australian Blue

The cheese of the day was the King Island Dairy Roaring Forties Blue. Another discovery at a local grocery store, the Blue had the appearance (and name) of a typical American blue cheese, but had far less bite and an awesomely smooth texture that is hard to find in domestic varieties, which tend to be dry and crumbly. After I polished off the last bit this evening I searched for the King Island website and found this helpful description:

A full flavoured blue with a sweet, slightly nutty character and good aftertaste. A rindless cheese matured in wax thus retaining its moisture and creating a smooth and creamy texture. A Roquefort style mould is used to create this unique and exciting cheese style.

Mmmm. A really great cheese. A bit of googling uncovered a recent newspaper review in the San Francisco Chronicle, which might be related somehow to the appearance of the cheese at a local grocery that boasts of a selection of over 3,000 wines:

My favorite among those I’ve tasted is the Roaring Forties Blue, a creamy, mild, blue-veined cheese from pasteurized cow’s milk. Local retailers tell me it is a customer favorite, too. […] Under the wax, you’ll find a moist, smooth and creamy blue with a mellow, almost sweet taste. It has neither the saltiness nor the pungency that characterizes many blues, which probably accounts for its popularity. Its lush, velvety texture calls for an equally luscious wine. Lustau’s Rare Cream Sherry, Solera Superior, accompanies it beautifully.

I couldn’t (and didn’t) say it any better myself, especially since I’ve never heard of those wines. I can just imagine that groceries in the future will have “hyper-linked” food. For example, when you pick up a cheese and put it in your cart, the cart’s interface will alert you to the appropriate selection of crackers and wine. Talk about a powerful and ubiquitous commerce model for information…

In the meantime, does anyone ever taste cheese with bourbon?

ID breach risk debated

ID Analytics has released a report that suggests ID theft from credit cards needs to be re-evaluated in terms of actual risk to consumers. Reuters picked up the report in today’s news, suggesting it shows that “where thieves access social security numbers and other sensitive information on consumers they have deliberately targeted — only about 1 in 1,000 victims had their identities stolen.” Reuters goes on to say:

After six months of study, comparing compromised information against credit applications, ID Analytics said it discovered something counterintuitive: The smaller the breach, the greater the likelihood the information was subsequently used by fraudsters to hijack the identity of victims.

“If you’re in a breach of 100, 200 or 250 names, there’s a pretty high probability that your identity is going to be used,” said Mike Cook, ID Analytics’ co-founder.

“The reason for that is if you look at how long it takes a fraudster to use an identity, they can roughly use 100 to 250 in a year. But as the size of the breach grows, it drops off pretty drastically.”

I do not think that is counter-intuitive at all. Small breaches are easily explained as directed attacks, as opposed to the more complicated investigation behind a story about a box of tapes that has been misplaced. The Ford Motor Credit breach is an example of a massive breach that is both highly directed and that can continue giving consumers grief for many years after their data is stolen. So it is plausible that every “loss” will in fact be discovered to be a successful attack or “breach” after the fact.
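To put Cook’s 100-to-250-identities-per-year figure in perspective, here is a back-of-the-envelope sketch of the per-victim odds it implies. The single-ring, single-year, no-resale assumptions are mine, and they are exactly the assumptions a thriving black market breaks.

```python
# Rough illustration of Cook's point: if one fraud ring can work through
# roughly 100-250 stolen identities a year, the chance that any particular
# victim in a breach is actually used falls as the breach gets larger.
# Assumes a single ring, a one-year window, and no resale of the data --
# simplifications of mine, not claims from the report.
def per_victim_misuse_probability(breach_size, capacity_per_year=250):
    return min(1.0, capacity_per_year / breach_size)

for size in (100, 250, 1_000, 100_000, 10_000_000):
    p = per_victim_misuse_probability(size)
    print(f"breach of {size:>10,} records -> ~{p:.4%} chance a given victim is used")
```

Those percentages scale with how many rings end up holding the data, which matters for the diversification point below.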

That being said, it is hard not to notice that Reuters claims first that “ID Analytics said it discovered that identity thieves have a hard time using stolen credit cards to hijack the identity of cardholders because the cards are usually quickly canceled” and then concludes with a rather contradictory statement from the ID Analytics co-founder:

“As far as notifications, we think there are certain instances where businesses might want to notify consumers and certain instances where they might not want to inform them,” said Cook. “For instance, if they lose data, and they don’t know where it is, we think too many notices may not be a good thing. They should probably monitor that and spend dollars on consumers who are actually harmed, rather than spending dollars on 10 million consumers” most of whom won’t be affected.

Where does the certainty come from? If you have lost the data, then presumably you have lost control of its use in the future. Who should be allowed to decide when it is safe to give up on an investigation and declare a “loss” as opposed to a “breach”? In addition, if notification and cancelling card numbers have made ID theft less easy, then why should you not notify all consumers when you have lost their data, to ensure the maximum reduction of risk? Monitoring to detect and catch a criminal I can understand, but I don’t follow the logic of “notification reduces risk when IDs are stolen, but you might not need to notify”. This reminds me of the theory that if trace amounts of a substance kill fewer than one in 1,000 customers, then large companies might find it more profitable to just pay off the families who suffer rather than prevent the harm.

Perhaps the lack of clarity is because Reuters did not mention the “significant finding”, which can be found at the start of the report from ID Analytics:

A significant finding from the research is that different breaches pose different degrees of risk. In the research, ID Analytics distinguishes between “identity-level” breaches, where names and Social Security numbers were stolen, and “account-level” breaches, where only account numbers—sometimes associated with names—were stolen. ID Analytics also discovered that the degree of risk varies based on the nature of the data breach, for example, whether the breach was the result of a deliberate hacking into a database or a seemingly unintentional loss of data, such as tapes or disks being lost in transit. […] Another key finding indicates that in certain targeted data breaches, notices may have a deterrent effect. In one large-scale identity-level breach, thieves slowed their use of the data to commit identity theft after public notification. The research also showed how the criminals who stole the data in the breaches used identity data manipulation, or “tumbling,” to avoid detection and to prolong the scam.

Precisely. That makes perfect sense to me, as everyone wants a spectrum of risk to review, not a black-or-white approach. And yet we should not forget that the vast majority of companies that house ID information still look at breaches from a “cup is half full” perspective and might not be in a suitable (expert) position to make intelligent decisions about the risks. Look at CardSystems, for example. The question is not whether people are trying to classify more levels of risk, but what is the quality of the data and their analysis — how qualified are today’s executives to make information security risk decisions on behalf of hundreds of thousands of consumers (assuming larger breaches will now automatically be determined to be lower risk)? Moreover, if you publicize reports that say huge breaches are lower risk and therefore exempt from breach disclosure, it seems obvious that clever criminals will simply shift to using huge breaches, no? That makes economic sense as well, since a huge breach can be diversified to many criminals who will be able to commit ID theft more efficiently. They do not call it a black market for nothing, eh?
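As for the “tumbling” the report mentions, the excerpt does not say how ID Analytics detects it, but the general idea is that exact-match checks miss applications whose identity fields have been nudged slightly. Below is a toy fuzzy-matching pass, with made-up names and numbers, purely to illustrate the concept.

```python
# "Tumbling": criminals slightly vary stolen identity data between
# fraudulent applications so that exact-match checks cannot link them.
# This is NOT ID Analytics' method (the excerpt does not describe one);
# it is a toy fuzzy-matching pass over made-up applications.
from difflib import SequenceMatcher

applications = [
    {"name": "John Q. Sample", "ssn": "123-45-6789"},
    {"name": "Jon Q. Sample",  "ssn": "123-45-6798"},  # tweaked name, transposed digits
    {"name": "Jane R. Other",  "ssn": "987-65-4321"},
]

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def likely_tumbled(app1, app2, threshold=0.85):
    """Flag pairs whose name AND number are nearly, but not exactly, identical."""
    return (app1 != app2
            and similarity(app1["name"], app2["name"]) > threshold
            and similarity(app1["ssn"], app2["ssn"]) > threshold)

for i, a in enumerate(applications):
    for b in applications[i + 1:]:
        if likely_tumbled(a, b):
            print("possible tumbled pair:", a["name"], "<->", b["name"])
```

A real detection system would obviously use far richer features and thresholds than a string-similarity ratio, but the principle of linking near-duplicates is the same.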

A San Diego newspaper article from 2003 mentions that ID Analytics is a startup with “Citibank, Dell Financial Services, Sprint, Diners Club, Discover Financial Services and First North American National Bank as clients”. The company definitely seems to be a reputable source on the subject (“30 employees, including seven Ph.D.s”) with a timely business strategy that, according to the newspaper, could almost be an extension of the Payment Card Industry itself:

“I’m convinced they have a good product,” said Beth Givens of the Privacy Rights Clearinghouse in San Diego. “It’s just a shame that the financial industry didn’t take steps to fix these problems themselves.”

Givens contends that “credit issuers should have been spending more time on each credit application all along. And by more time, I mean, by two minutes.”

Hansen grimaced a bit when the gist of Givens’ criticism was recited.

He acknowledged that a financial institution might process as many as 10,000 credit applications an hour. But he says the industry also contends with a variety of challenges, including regulatory requirements to process each credit application within 30 days.

“What we’re trying to do is bring a technology solution to bear within the context of an automatic processing environment,” Hansen said.

In other news, Visa is asking assessors to re-certify, due to recent changes to the PCI data security standards, to the tune of tens of thousands of dollars in training fees. That is a hefty chunk of money, even for veterans of PCI security, and, believe it or not, the standards are expected to change again in January 2006. Contrast this money-maker with the fact that Visa is giving away compliance scans for free outside the US and that the amount of credit card fraud has dropped dramatically over the past ten years due mainly to additional card verification measures (from billions of dollars to the low hundreds of millions). Fascinating stuff and a very exciting time to be in information security.

Dec 12, 2005 Update: Written Testimony from ID Analytics that was submitted on Nov 9th, 2005 to the Subcommittee on Financial Institutions and Consumer Credit (Hearing on H.R. 3997, the “Financial Data Protection Act of 2005”) can be found here:

However, misuse rates could continue to increase drastically over time if the vibrant black market for “identities” remains unimpeded. … By selling any amount of the remaining identities (those not able to be used because of the ‘feasible limit’), fraud rings could maximize the proceeds from their efforts and exact a far greater degree of harm to consumers, industry and government over time.

Yet another DRM malware alert

If this keeps up I may need a dedicated DRM category just to track the flow of malware released under the guise of protecting big-media profits at the expense of consumer rights. The Register reported today:

According to the EFF, the vulnerability centres on a file folder installed by the MediaMax software shipped on some Sony CDs, “that could allow malicious third parties who have localized, lower-privilege access to gain control over a consumer’s computer running the Windows operating system.” …other severe problems with MediaMax discs, including: undisclosed communications with servers Sony controls… undisclosed installation of over 18 MB of software regardless of whether the user agrees to the End User License Agreement; and failure to include an uninstaller with the CD.”

There is definitely a balance and a right way to do things in “digital copyright management” (DCM), which is what DRM should be called, but the fact that the EFF claims 30 other labels use this software means the big labels either do not realize the harm they are causing by distributing malware or they do not believe they are liable. A healthy market would find neither acceptable.

Diebold insider issues warnings

RawStory posted an “exclusive interview” yesterday. There are some harsh allegations that, taken together, amount to a stern warning to stay away from Diebold systems until an independent and open validation is available:

Previous revelations from the whistleblower have included evidence that Diebold’s upper management and top government officials knew of backdoor software in Diebold’s central tabulator before the 2004 election, but ignored urgent warnings—such as a Homeland Security alert posted on the Internet.

[…]

The 2002 gubernatorial election in Georgia raised serious red flags, the source said.

“Shortly before the election, ten days to two weeks, we were told that the date in the machine was malfunctioning,” the source recalled. “So we were told ‘Apply this patch in a big rush.’” Later, the Diebold insider learned that the patches were never certified by the state of Georgia, as required by law.

[…]

Responding to public demand for paper trails, Diebold has devised a means of retrofitting its paperless TSX system with printers and paper rolls. But in Ohio’s November 2005 election, some machines produced blank paper.

The whistleblower is not surprised. “The software is again the culprit here. It’s not completely developed. I saw the exact same thing in Chicago during a demonstration held in Cook County for a committee of people who were looking at various election machines… They rejected it for other reasons.”

Asked if Ohio officials were made aware of that failure prior to the recent election, the source said, “No way. Anything goes wrong inside Diebold, it’s hush-hush.”

Most officials are not notified of failed demonstrations like the one in Cook County, the insider said, adding that most system tests, particularly those exhibited for sale, are not conducted with a typical model.

California recently conducted a test of the system, without public scrutiny, that found only a three percent failure rate—far lower than earlier tests, which found a 30 percent combined failure rate due to software crashes and printer jams.

Asked if the outcomes of the newest test should be trusted, the whistleblower, who does not know the protocols used in the California test, warned, “There’s a practice in testing where you get a pumped-up machine and pumped-up servers, and that’s what you allow them to test. Diebold does it and so do other manufacturers. It’s extremely common.”

[…]

The Diebold insider noted that the initial GEMS system used to tabulate votes for the Diebold Opti-scan systems was designed by Jeffrey Dean, who was convicted in the early 1990s of computer-aided embezzlement. Dean was hired by Global Election Systems, which Diebold acquired in 2000. Global also had John Elder, a convicted cocaine trafficker, on its payroll.

Someone convicted of computer-aided embezzlement designed the system? Security clearance is mandatory for many government jobs related to handling sensitive information; one would think that election systems should be treated in a similar fashion. At this point, Diebold should be held to a strict burden of proof that their systems are safe, and should not be allowed to release any product for public consumption until the uncertainty has been thoroughly resolved.

Alternatively, perhaps Diebold management should ask their staff to use their own systems to vote on future direction for the company and swear that they will abide by the outcome. Live by the sword…

Gates wrong about spam

Apparently as many as 80% of people surveyed did not trust Gates in 2004 when he announced that spam would be gone by 2006. An article in today’s ZDNet suggests that within 30 days that number might jump to as high as 100%:

Bill Gates’ prediction of January 2004 that spam would be “a thing of the past” within two years has virtually no chance of coming true, according to security company Sophos this week.

Beware those who say “security will happen by x date”. True security is far more complex and subject to far more uncertainty than a short-term objective such as a functionality enhancement. Moreover, there are usually so many influential factors that it is better to say “security will have x control in place by y date” and predict a resulting soft “decline” rather than any “absolute” or “total” eradication.

ZDNet put it slightly differently when they covered the original announcement.

John Cheney, chief executive of email security firm BlackSpider Technologies, which conducted the survey, said the results show that the industry doesn’t perceive Microsoft as a security authority, despite its chairman’s enthusiasm for the task.

To his credit, at least Gates did not land on the roof of Symantec in 2004 for a photo-op in front of a “Mission Accomplished” banner.

The Key to Recovery

Quantum apparently thinks 2006 will finally be a good year to market security for tape backups. They just announced that in the first quarter of 2006 they will be ready to provide an authentication (locking) mechanism for tapes:

Quantum’s DLTSage Tape Security is a firmware feature designed into its newest DLT tape drives that uses an electronic key to prevent or allow reading and writing of data on to a tape cartridge.

Sounds interesting. The two big hurdles to encryption on tape have been how to handle key management and the performance hit. With key management integrated first, Quantum still has to generate some buzz about performance. They mentioned it briefly in their DLTSage announcement, but it sounds like they are still working on what to do with the technology in an appliance they acquired:

The DataFort appliance provides wire-speed, transparent encryption and access controls for disk and tape storage systems, delivering best-in-class security, performance and key management for heterogeneous storage environments. In addition to the joint sales and marketing efforts with Decru, Quantum also plans to offer tightly integrated encryption and security management capabilities within its product line.

Quantum could be hinting that their encryption appliances will give way to a more integrated solution, which sounds like a reasonable and well-worn approach to enhancing big company legacy products with innovation via acquisitions. If the integration is successful I expect we will find ourselves without any good reason not to encrypt at the block-level, especially on recovery systems. Until then, it seems we must continue file-level encryption prior to backup.
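For reference, here is a minimal sketch of what file-level encryption ahead of the backup job can look like. The third-party cryptography package and the key-file handling are my own illustrative choices, not anything Quantum or Decru ship, and the key file below stands in for whatever escrow process you actually trust.

```python
# Minimal sketch: encrypt a file before it ever reaches the backup/tape
# job. The "cryptography" package and the key-file handling here are
# illustrative choices, not a vendor feature. In practice the key must be
# escrowed and kept off the same media as the backup.
from pathlib import Path
from cryptography.fernet import Fernet

def encrypt_for_backup(src, key_path):
    """Encrypt src to src.enc with a symmetric key stored at key_path."""
    src, key_path = Path(src), Path(key_path)
    if key_path.exists():
        key = key_path.read_bytes()
    else:
        key = Fernet.generate_key()
        key_path.write_bytes(key)  # stand-in for a real escrow/HSM process
    out = src.parent / (src.name + ".enc")
    out.write_bytes(Fernet(key).encrypt(src.read_bytes()))
    return out

# encrypted = encrypt_for_backup("payroll.db", "backup.key")
# hand the .enc file to the backup job; never write backup.key to the same tape
```

The encryption call itself is trivial; the hassle, as ever, is the key management.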

So, is a lock on a tape worth the hassle? It does not satisfy breach-notification laws the way encryption does, yet it introduces the risk of lost keys, so there is no real ROI there; but it does pre-stage the backup process with tighter authentication. And that may be worthwhile if you can ensure that time spent on key management now will help reduce the cost of encryption down the road (once performance is truly a dead issue).

Computer controls and conclusions

Donohue and Levitt are somewhat famous for their bold claim, published in the May 2001 edition of the Quarterly Journal of Economics, that legalized abortion has reduced crime.

The Economist just put forward an amusing update that discusses a Federal Reserve Bank of Boston working paper and its counter-claim, based on a re-test of the data and an analysis of the computer code used by Donohue and Levitt:

Messrs Foote and Goetz have inspected the authors’ computer code and found the controls missing. In other words, Messrs Donohue and Levitt did not run the test they thought they had—an “inadvertent but serious computer programming error”, according to Messrs Foote and Goetz.

Fixing that error reduces the effect of abortion on arrests by about half, using the original data, and two-thirds using updated numbers. But there is more. In their flawed test, Messrs Donohue and Levitt seek to explain arrest totals (eg, the 465 Alabamans of 18 years of age arrested for violent crime in 1989), not arrest rates per head (ie, 6.6 arrests per 100,000). This is unsatisfactory, because a smaller cohort will obviously commit fewer crimes in total. Messrs Foote and Goetz, by contrast, look at arrest rates, using passable population estimates based on data from the Census Bureau, and discover that the impact of abortion on arrest rates disappears entirely.

I look forward to the question of this programming “error” being addressed by Donohue and Levitt. It does not seem to refute the premise of their conclusion outright so much as question the methodology and give them an opportunity to fix the controls and re-run the tests.
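To make the totals-versus-rates point concrete, here is a toy re-run on synthetic data. The plain OLS with state and year dummies is my own simplification, not the Donohue-Levitt specification or the Foote-Goetz replication; it only shows how modelling arrest totals lets cohort size masquerade as a crime effect while per-capita rates do not.

```python
# Toy illustration of the totals-vs-rates critique on synthetic data.
# NOT the Donohue-Levitt model or the Foote-Goetz replication -- just a
# simple OLS with state/year dummies showing that if higher "exposure"
# shrinks the cohort, arrest *totals* pick up a spurious negative effect
# even when the arrest *rate* is unrelated to exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for state in "ABCDE":
    for year in range(1985, 1998):
        exposure = rng.uniform(0, 1)                                     # synthetic exposure proxy
        cohort = 90_000 * (1 - 0.3 * exposure) + rng.normal(0, 2_000)    # smaller cohort when exposure is high
        rate = 600 + rng.normal(0, 40)                                   # arrests per 100k, independent of exposure
        rows.append(dict(state=state, year=year, exposure=exposure,
                         arrests=rate * cohort / 100_000, arrest_rate=rate))
df = pd.DataFrame(rows)

totals = smf.ols("arrests ~ exposure + C(state) + C(year)", df).fit()
rates = smf.ols("arrest_rate ~ exposure + C(state) + C(year)", df).fit()
print("coefficient on exposure, totals model:", round(totals.params["exposure"], 1))  # strongly negative by construction
print("coefficient on exposure, rates model: ", round(rates.params["exposure"], 1))   # near zero by construction
```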

The big question, of course, is still whether there are controls that have a direct relationship to reducing crime and at what cost.

Top 10 Data Disasters

Ontrack has released their annual report on the top ten data disasters. It is a serious business, and Ontrack has built quite a reputation for saving the day(ta):

10. PhD Almost an F – A PhD candidate lost his entire dissertation when a bad power supply suddenly zapped his computer and damaged the USB Flash drive that stored the document. Had the data not been recovered, the student would not have graduated.

He must have been in a state of shock — “Teacher, the electricity ate my homework”.

4. Drilling for Data – During a multi-drive RAID recovery, engineers discovered one drive belonging in the set was missing. The customer found the missing drive in a dumpster, but in compliance with company policy for disposing of old drives, it had a hole drilled through it.

Can we please see the top ten without the remarkable recoveries included (just the failures)? That would be more interesting, I think. Or, as the famous WWII story goes, if you are going to better protect your pilots you should review the planes that were shot down rather than just the ones that returned.
