* An end-user will not be able to tell the difference between these counterfeits and authentic Petzl products (see below for more information)
* They have serious quality, performance and safety problems.
For these reasons, Petzl decided to alert its end-users and begin legal action against the counterfeiters.
This has many potential uses, for both good and ill. It basically takes the old concept of secret tracking devices and tries to turn it into a security commodity for everyone to enjoy.
In an ACM SenSys 2010 paper, we present AutoWitness, a system to deter, detect, and track personal property theft, improve historically dismal stolen property recovery rates, and disrupt stolen property distribution networks. A property owner embeds a small tag inside the asset to be protected, where the tag lies dormant until it detects vehicular movement.
More to the point, from a market perspective, if we accept that electronics are commodities, then an encryption and backup/restore strategy is far simpler and less costly than tracking, capturing, and recovering stolen electronics.
When someone grabs your iPhone and makes a run for it, you will probably have better peace of mind with encryption and recent backups than with trying to chase and detain the attacker. As someone at the RSA Conference said after he accidentally left his phone in a taxi, “even if I could get it back it would probably be bricked”.
Information is not really that much safer with the AutoWitness control option. It adds marginal value versus other controls and can actually introduce new risks. As an inexpensive device to monitor someone, on the other hand, it provides a *new* source of information and can add significant value at a lower cost than other controls.
Nonetheless, just like a lot of the other forensics and investigation tools, I bet this will continue to be marketed as a disaster recovery solution.
For some teams, that’s a lot of work: especially teams that are not building simple out-of-the-box web apps, Agile teams following Continuous Delivery with frequent deployments to production, or teams practicing Continuous Deployment and updating production several times a day.
And WAFs add to operational cost and complexity, and there is a performance cost as well. And like a lot of the other appsec “solutions” available today, WAFs only protect you from some problems and leave others open.
I do not disagree in principle, but this is just another way of saying we want something more effective for less cost.
As long as we’re posting our wishes why not push the onus back onto developers? Can’t they just develop more useful and secure code for less cost?
It has to be simpler. It’s too hard to write secure software, too easy for even smart programmers to make bad mistakes – it’s like having a picnic in a minefield. The tools that we have today cost too much and find too little. Building secure software is expensive, inefficient, and there is no way to know when you have done enough.
There aren’t any easy answers, simple solutions. But I’m still going to look for them.
Can’t hurt to look, right? There has to be an easy assembly-line way to make coding more like grabbing a picnic basket from McDonald’s instead of all the complicated and messy work of cooking in a kitchen…even for a day in the minefields. Good analogy, Jim. That security problem was easy to solve in the real world, right?
Clearing minefields is a long, slow, time-consuming process, and there is no room for error.
Oh well, move along. Nothing to see here. Don’t look at Jim’s poor analogy blown to bits.
The Environmental Protection Agency says it has settled with the manufacturer of Crocs over a case of unproven health claims.
Perhaps Henry Ford put it best when he famously said the cost of practicing security was never justified:
Security is bunk. If you are safe, you don’t need it: if you are breached it is too late.
Ok, I confess I adapted that. He actually was speaking about the cost of exercise to stay healthy…
Exercise is bunk. If you are healthy, you don’t need it: if you are sick you should not take it.
On the contrary, the low cost of exercise (while you don’t “need” it) may in fact be part of the benefit. You invest while you are healthy, as a preventive measure, because if you take shortcuts or try to add it later you will not achieve the same return on investment.
Back to the WAF, Jim might find that “a lot of work” spent on security for the firewall might actually be worth it in terms of understanding security of his apps better, improving them overall, as well as preventing breaches and known attacks. I wager he will find the cheap and easy cure for application security around the same time that he finds the cheap and easy cure for health.
Even if you find it, it might not go where you want today (Photo by me)
Authentication != Authorization: The User Cannot Be Trusted
Mass Assignment Will Ruin Your Day
NoSQL Doesn’t Mean No SQL Injection
Take Care With Releasing Software To End Users
Is Diaspora Secure After The Patches?
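The first lesson in the list above, that authentication is not authorization, maps to a classic bug pattern: the app confirms *who* you are but never checks *what* you own. Here is a minimal sketch in Python using sqlite3; the schema and function names are hypothetical illustrations, not Diaspora’s actual code:

```python
import sqlite3

# Hypothetical schema for illustration: photos belong to users.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, owner_id INTEGER, data TEXT)")
conn.execute("INSERT INTO photos VALUES (1, 100, 'alice-private.jpg')")

def get_photo_insecure(photo_id):
    # The request was authenticated elsewhere, but ownership is never
    # checked: any logged-in user can fetch any photo by guessing its id.
    return conn.execute(
        "SELECT data FROM photos WHERE id = ?", (photo_id,)
    ).fetchone()

def get_photo_secure(photo_id, current_user_id):
    # Authorization: scope the query to records the requester actually owns.
    return conn.execute(
        "SELECT data FROM photos WHERE id = ? AND owner_id = ?",
        (photo_id, current_user_id),
    ).fetchone()

# User 200 is authenticated, but does not own photo 1:
print(get_photo_insecure(1))     # leaks ('alice-private.jpg',)
print(get_photo_secure(1, 200))  # None: request denied
```

The fix is boring and cheap when applied consistently, which is exactly why it gets skipped under launch-deadline pressure.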
Although I can get a good chuckle out of the following conclusion, I think it misses the point entirely.
The team is manifestly out of their depth with regards to web application security, and it is almost certainly impossible for them to gather the required expertise and still hit their timetable for public release in a month. You might believe in the powers of OSS to gather experts (or at least folks who have shipped a Rails app, like myself) to Diaspora’s banner and ferret out all the issues. You might also believe in magic code-fixing fairies. Personally, I’d be praying for the fairies because if Diaspora is dependent on the OSS community their users are screwed. There are, almost certainly, exploits as severe as the above ones left in the app, and there almost certainly will be zero-day attacks by hackers who would like to make the headline news. “Facebook Competitor Diaspora Launches; All Users Data Compromised Immediately” makes for a smashing headline in the New York Times, wouldn’t you say?
I will never forget meeting a few Facebook employees in 2009 and expressing my concerns about the security flaws I had found. Their response was “yes, we know it’s riddled with holes, but security is not our focus”.
Thus began my immediate removal of personal information and closure of my profile.
Diaspora may be bad, but it is still very early. Can it be worse than handing over data to an ancient (by social network standards) private company with no transparency, ruled by a man funded by Russian investors, who most likely hopes to profit from your loss (of privacy)?
Zuckerberg faced serious charges of breach of security, violating copyrights, and violating individual privacy.
Also, if you believe at all in social networking and collaboration, Diaspora makes a lot more sense than the centrally-planned autocratic regime of Facebook. Diaspora follows an open market model where its users can assess the security of the code, collaborate to write and test fixes, and even run different pods at a level of privacy they find acceptable.
None of that is true for Facebook, which is what Columbia University law professor Eben Moglen accurately termed anti-democratic “spying for free” (surveillance capital).
In other words, Diaspora might be insecure because it lacks talent, but not because it lacks the potential to become secure. It can be as good as the community that uses it and based on their vision: the whole being greater than the sum of its parts, in the spirit of being socially networked. Facebook’s security potential, by comparison, is in steady decline by design and will only get worse; every move by their leader has been to reduce your privacy for profit.
You might believe that its founder and his lenders will have a change of heart. You might also believe in magic management-fixing fairies….
Personally, I’d be praying for the fairies because if Facebook is dependent on Zuckerberg their users are screwed.
Posted on February 22nd, 2011 at 7:08 pm […] My Facebook account’s web settings specify full-time encrypted traffic, but this apparently isn’t honored or supported by Facebook’s Android app. Facebook isn’t doing anything like OAuth signatures, so it may be possible to inject bogus posts as well. Also notable: one of the requests we saw going from my phone to the Facebook server included an SQL statement within. Could Facebook’s server have a SQL injection vulnerability?
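We cannot know from the outside what Facebook’s server actually does with that SQL text, but the observation alone is alarming: any server that executes query text assembled from client input is one quote character away from injection. A hedged sketch with Python’s sqlite3, using a hypothetical table and values, shows why string-built SQL fails where a parameterized query holds:

```python
import sqlite3

# Hypothetical data store for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (user TEXT, body TEXT)")
conn.execute("INSERT INTO posts VALUES ('alice', 'hello')")

# A "username" an attacker might submit from a client they control:
malicious_user = "alice' OR '1'='1"

# Vulnerable pattern: building the statement by string interpolation.
# The attacker's quote breaks out of the literal and the OR clause
# matches every row, bypassing the filter entirely.
leaky = conn.execute(
    "SELECT body FROM posts WHERE user = '%s'" % malicious_user
).fetchall()

# Safe pattern: the bound parameter is treated strictly as data,
# so the query looks for a user literally named "alice' OR '1'='1".
safe = conn.execute(
    "SELECT body FROM posts WHERE user = ?", (malicious_user,)
).fetchall()
```

If a client ships raw SQL to the server, the server has effectively handed the attacker the left-hand pattern by design.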
a blog about the poetry of information security, since 1995