Stupid Security Awards

This is kind of funny and sad at the same time:

A civil rights organisation wants to hear examples of security measures that are so ill-advised, impotent or irritating that they should be named and shamed.

Privacy International (PI) announced on Monday that it is holding the “Stupid Security Awards” in an attempt to highlight the absurdities of the security industry.

I guess the positive aspect of this is that it brings security into a sphere of communication and discussion that could lead to improvements, if improvements are really the desired outcome. I worry that the trade-offs will not be discussed in a fair light, since it could easily become a “we hate helmets and we’re louder than you so repeal the law now” festival.

Does it count if you are in the security industry and create an ill-advised measure just to win the award? That would be sooo fitting if someone were to game a system meant to highlight stupidity in security measures.

FCC (still) investigating fake news

The Center for Media and Democracy (CMD) did a study of news stations in America and documented many cases where the true source of video news releases (VNRs) was not clearly disclosed:

KOKH-25 in Oklahoma City, OK, a FOX station owned by Sinclair, aired six of the VNRs tracked by CMD, making it this report’s top repeat offender. Consistently, KOKH-25 failed to provide any disclosure to news audiences. The station also aired five of the six VNRs in their entirety, and kept the publicist’s original narration each time.

The FCC responded by announcing an investigation of their own, as noted by the CMD:

If the Commission determines after investigation that a licensee has violated sponsorship identification rules, the FCC may impose monetary fines of up to $32,500 per violation, and initiate license revocation proceedings against licensees. Section 507 of the Communications Act establishes civil and criminal penalties for violation of disclosure requirements, with the possibility of a fine of up to $10,000 and as much as a year of imprisonment.

The fines go higher for repeat offenses, but there appears to be a maximum of less than $500,000. The CMD report highlighted over seventy stations, including more than twenty from Walt Disney Company's ABC network and seven from the Sinclair Broadcast Group Inc. I wonder whether the FCC will find more, and how far back in time it will look, now that video archives are available almost everywhere.

The Independent has a story that the FCC investigations started almost immediately after the CMD report was published last April:

Federal authorities are actively investigating dozens of American television stations for broadcasting items produced by the Bush administration and major corporations, and passing them off as normal news. Some of the fake news segments talked up success in the war in Iraq, or promoted the companies’ products.

Fake news segments talked up success? Sounds like politics as usual, and why Roosevelt created the FCC in the first place.

3D security

I was impressed to find that you can buy an entire Tyrol Castle for $14.95, only a few dollars more than a Bulky Conifer on the Cornucopia3D marketplace. If you buy one conifer, can you make a forest, or are you charged per tree?

A large cottage will run you $8.95, unless you want a non-copy-protected version, which costs $13.25. Hmmm, for less than two dollars more than that you could have had a copy-protected version of the Tyrol castle. The castle seems like a bargain.

It is hard to duplicate your neighbor's house in real life without costly materials, but in the virtual world anything seems possible. What if you could copy his wife? The work of securing 3D/virtualized information has only just begun.

Brain Training

Interesting analysis of the Brain Training phenomenon. I would hardly call it a “trojan horse” or claim that it is not a game, but otherwise a nice description of what makes the software so popular.

Now, if they could just have a few questions/quizzes to make people more careful with how they treat their (and others’) data.

As a side note, I had several conversations about user education today and then found Bruce Schneier had posted something about it on his blog as well. He mentions a counterpoint by Marcus Ranum, but this just made me think of the fundamental differences in perception between the soft-spoken Bruce and the tough-as-nails Marcus. They both seem to love educating others and spreading the word about good security practices, but both also make a case for a world with less need for education (Bruce by creating incentives for people to make simpler technology, and Marcus by creating incentives for people to learn risks faster).

One big problem with Marcus' theory, incidentally, is that people may never correctly identify the true cause of a failure. I have heard this referred to as the cat-and-stove theory: a cat might not step on a particular hot stove again after it gets burned, but it tends not to translate that experience into "red stove tops are hot, don't step on them." I posted a comment on Bruce's blog to highlight an issue with his theory of "just make things safer."