Do Walls Work?

Strangely enough I’ve been getting this question lately from people who believe I might have an answer. Little do they realize how complicated the answer really is.

The short answer (from a political economy view) is that walls will be said to work when someone is trying to get them funded, and will be said to not work when those same people (or those who inherit their folly) try to get everything else funded — because walls fail easily, as anyone familiar with security can predict.

Before I go much further, let me briefly turn to the philosophical question of walls. One of the most famous Muslim scholars in the world, Muhammad Ali, probably best exemplified the answer to any questions about walls “working”. Here’s a eulogy to his wisdom, worth a watch in its entirety. For purposes here I’ve started towards the end at the relevant quote:

…life is best when you build bridges between people, not walls.

So if the celebrated genius of a fighter Ali tells us life is best with bridges, why build walls at all? And if security experts (defenders and attackers) so easily predict failures, why spend money on them? These are the kinds of questions every CSO should be well-prepared to answer. It’s basically the “why should I fund your project to disable connections, when the point of business is to enable them” meeting.

This goes to the heart of the Anti-Virus (AV) industry, and the current derivatives (Clownstrike, Cylance…you know who I’m talking about).

In the early days of viruses (the early 1980s) there were theories about positive security models, which measured system integrity in the way we today talk about “whitelists”. If you want to run something on a computer, you boil the system down to its essence, the most efficient model and description, such that anything outside that ordinary baseline can be flagged as unusual or even adversarial.

Such a model of safety isn’t revolutionary by any stretch of the imagination. It was a simple case of people with some knowledge of the healthcare industry saying computer viruses could be detected by looking for what is abnormal, which requires a thorough scientific understanding of what is normal.
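
As a thought experiment (my own sketch, not anything from the 1980s literature), a positive security model can be as simple as hashing known-good files into a baseline and flagging anything outside it:

```python
import hashlib
from pathlib import Path

# Hypothetical positive security model: build a baseline ("whitelist")
# of known-good file hashes, then flag anything outside that baseline.
def build_baseline(paths):
    """Hash each known-good file to establish what 'normal' looks like."""
    baseline = set()
    for p in paths:
        digest = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        baseline.add(digest)
    return baseline

def is_anomalous(path, baseline):
    """Anything not in the baseline is unusual, possibly adversarial."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    return digest not in baseline
```

The hard part, of course, is not the code but the science: establishing and maintaining an honest baseline of “normal” for a real system.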

Well, in healthcare there is motivation to spend on establishing such knowledge, because “healthy” is a valuable state of being. In computers, however, there was a giant loophole preventing this kind of science from being developed. Companies like McAfee realized right away that if you just scare people with fear, uncertainty and doubt about imminent invasion by caravans of viruses, you can get them to throw money into a wall (even though it doesn’t work).

I would make the usual snake-oil reference here, except I have to first point out that snake-oil has real health benefits.

…snake oil in its original form really was effective, especially when used to treat arthritis and bursitis.

The concept of a snake-oil salesman refers to some shady American guy stealing Chinese ideas and using cheap counterfeits to profit on harm to customers. Thus around 1987 began the McAfee model of building walls for huge amounts of money (today we say “blacklists” are ineffective, when really we could say fake snake-oil). At that time McAfee the man himself created a company to collect money for delivering little more than a sense of safety, while attackers easily bypassed it.

Unfortunately consumers bought into this novelty wall sold by McAfee, despite it being mostly nonsense. The opportunity cost was massive and the security industry has taken decades to recover. Innovators trying to compete by achieving any kind of security “science” in operations were obviously far less profitable than the raft of snake-oil McAfee marketing executives.

Consider, for example, how in 1992 McAfee told the world an invasion was coming and that it needed him to build some more walls.

McAfee was blamed for creating a false threat to sell more of his anti-virus elixir – which he did. McAfee’s anti-virus software sales reportedly “skyrocketed” that year, with more than half of the companies in the Fortune 100 having purchased McAfee software. Of course, this only furthered the theory that McAfee had just made up the whole damn thing.

He retired after this, scooping up millions in profit by building walls that didn’t work against a threat that didn’t exist.

To be fair, threats do exist, and walls do have a role to play. Hey, after all we do use firewalls too, right? And firewalls have proven themselves useful in a most basic way, by making attackers shift to the application layer when all the other service ports are closed.

In other words, firewalls “work” the same way a border wall could end up dramatically increasing threats coming through airports, seaports and even underground: air, sea and land threats could increase and become harder to detect. When I used to pentest utilities, for example, we rated walls as significantly less effective at stopping us than six-sided boxes (buildings, if you will).

True story: on a datacenter pentest I approached two layers of walls. The first was easily bypassed and then I used some engineering to get through the second one. It was only at that point I realized I was in the wrong location. Datacenters used to be careful to avoid having any outward logos or markings, even obfuscating their address. In this case it worked! After getting myself through two walls without much thought, I was looking right at an ICE logo and a bunch of guns.

Yes, I had accidentally tested the Immigration and Customs Enforcement (ICE) facility…and immediately began egress. Getting out quickly took some creativity, unlike getting in, and ended up being a better skills test. In the end it was fine, a laugh for everyone, including the datacenter (which I did test immediately after).

So in the strictest sense, walls have some work to do, and they may be capable of delivering. This is very different from saying walls work, however, when people are thinking in the broader sense of being safe from harm. A con-man like McAfee can vacuum up money to get rich while delivering almost no value, because “walls work” is a tiny grain of truth in his giant cake factory shipping nutrition-less lies about health (risk and safety).

1600s: Que ne mangent-ils de la croûte de pâté? (let them eat forcemeat crust!)

1700s: Qu’ils mangent de la brioche! (let them eat cake!)

1990s: Let me install AV!

2019: Let me build a wall!

Almost a decade ago I did a small speaking tour about cloud security on this topic, although I used the Maginot line as an example. This massive defensive wall was named after 1930s French Minister of War André Maginot, and constructed along the country’s border with Germany.

I pointed out that one could argue the Maginot line forced attackers to shift tactics and use other entry methods. In that sense those walls did some actual work, like a firewall or AV will do for you today.

However, the French expected the wall to prevent the very thing that happened (rapid invasion past their borders). In fact, the forces at the wall became so irrelevant that they still stood ready and willing to fight even after the French government capitulated to the Nazis. Let’s face it, had the French military leadership simply listened to all the active warnings about Nazis going around the line, France likely could have said the wall did its job of focusing their active response (directing defenses to the un-walled borders of neutral countries).

The French leadership failed to notice something was not normal (enemy troops moving through the Ardennes Forest and violating neutral countries). And that is why their expensive wall continues to be almost universally remembered as a huge failure. (Some do still argue, as I did too, that Maginot’s plans worked within an extremely narrow assessment).

I don’t think any French person to this day would say their wall worked, however, given how it was billed to them at the time of funding (an extremely high cost, which weakened more modern and important security needs like detection and radio/aero/rapid response).

For the French, the greatest failing of the Maginot Line arguably lay not in its conception, but in the opportunity costs that its construction imposed. The 87 miles of fortifications that were completed by 1935 cost some 7 billion francs ($8 billion in 2015 terms), over twice the initial estimate when the effort began in 1930. Depending on the source, the entire French defense budget in 1935 was between 7.5 (John Mearsheimer) and 12.8 billion (Williamson Murray) francs. As a result of this stupendous outlay, French military development in all other areas, from tanks to aircraft, suffered.

In other words, the current US regime is looking at data suggesting airports are the vulnerable path for entry and yet is proposing money be spent on something completely unrelated to airports. France in this scenario would be looking at data suggesting forests and neutral countries are the vulnerable paths for entry and blowing its budget on a wall elsewhere.

Terrorists trying to infiltrate the U.S. across our southern border was more of a theoretical vulnerability than an actual one…the figure she seems to be citing is based on 2017 data, not 2018, and refers to stops made by Department of Homeland Security across the globe, mainly at airports.

Does a wall on the border help with the real vulnerability in airports? No. The wall expense actually hurts, making the US materially less safe. One might conclude that shutting the government down, reducing active defenses at airports, to force a redirection of security funds to a useless wall is a very cynical plot that any hostile adversary would dream about.

To put a finer point on it, the expensive shutdown and the demand for an expensive wall both reflect the self-harming anti-American mindset of the current regime, and present grave dangers to US national security.

The long answer is thus that walls work at a very primitive level, which tends not to be worth the cost except in particular cases where the predicted results are known and wanted. In the present context of the US border, there is no imminent threat, and there is little to no chance of success without massive additional investment in detecting the other methods of entry that are predicted (again, for a non-imminent threat).

There’s a reason AV is mostly free today. And it’s the same reason building a wall on the US border has been pitched as an extremely expensive response to a fantasy threat, meaning it has little to no real value. Someone is trying to redistribute wealth and quit before people realize the walls are a distraction, where wasting time and money turns out to have been the objective (to hurt America).

History is pretty useful here, as we can easily prove that walls have for thousands of years failed to prevent people from climbing up (and down) them.

It is believed that the idea of a ladder was used over 10,000 years ago. We know this because pictures of them were discovered in a cave in Spain.

The ladder is also mentioned in the Bible. Jacob had a dream and in the dream he saw a ladder reaching from Heaven to earth.

Fun fact, ladders are much older than wheels. That’s right, ladders are more than twice as old as the wheel! And we obviously can say walls came before ladders. Thus always remember, when someone asks you which is older the wheel or the wall, go with the ladder (pun intended).

It remains to be seen, however, whether this sort of wall debate and debacle making the US less safe is going to force the US regime leader to step down.

Incidentally, the Maginot example was not my only one on that speaking tour. Since I was invited to speak in England as well as the US, I thought it only fitting in 2010 that I use castle walls as an example of technology shifts, like a cannon, sawzall or a hypervisor escape vulnerability…the kind of inexpensive and fast-moving thing that makes wall builders shudder.

At Least Five LiDAR Challenges for Vehicles

Sensors Online has a nice summary of the current product management view for LiDAR manufacturers. They spell out these five concerns:

  1. Size
  2. Cost
  3. Reliability
  4. Range
  5. Eye Safety

Conspicuously missing from the list (pun not intended) is integrity of the data.

Reliability in the above list refers only to environmental risks (“replace the moving parts with a solid-state alternative with each component able to meet Grade 1 temperature and qualification”) instead of the sort of overconfidence in imagery I’ve spoken about in the past (August 2014 “Babar-ians at the Gate: Data Protection at Massive Scale,” Blackhat USA).
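
For what it’s worth, here is a minimal sketch of what link-level data integrity could look like (my own assumption, not anything these vendors ship): authenticate each LiDAR frame with an HMAC so downstream consumers can detect tampering in transit. It does nothing against spoofed laser returns at the physical layer, which is the harder problem.

```python
import hmac, hashlib, struct

# Hypothetical sketch: authenticate each LiDAR frame with an HMAC so the
# planner can detect tampering between sensor and consumer. The key name
# and framing are assumptions for illustration only.
KEY = b"per-vehicle-secret-provisioned-at-manufacture"

def sign_frame(frame_id: int, points: bytes) -> bytes:
    header = struct.pack(">Q", frame_id)  # frame counter resists replay
    tag = hmac.new(KEY, header + points, hashlib.sha256).digest()
    return header + tag + points

def verify_frame(blob: bytes):
    header, tag, points = blob[:8], blob[8:40], blob[40:]
    expected = hmac.new(KEY, header + points, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("LiDAR frame failed integrity check")
    return struct.unpack(">Q", header)[0], points
```

Again, this only covers the data path; integrity of the measurement itself (an attacker shining lasers at the sensor) needs plausibility checks and sensor fusion, not cryptography.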

To be fair, the article is kind of a hidden marketing pitch, written by a company promoting its new line of products:

…patented flip-chip, back-emitting VCSEL arrays that combine high pulsed power arrays, integrated micro-optics, and electronic beam steering on a chip.

So it makes sense they aren’t going to talk about the more fundamental flaws in LiDAR that their company/product isn’t solving.

EV Charging Station Vulnerability

Anyone else read this article about the bug in a Schneider product?

At its worst, an attacker can force a plugged-in vehicle to stop charging

At its best, an attacker can give away power for free.

That’s basically it. A hardcoded password meant the power could be disabled, although really that means it could be enabled again too. Breaking news: a switch installed in public places could be switched without special switching authorization.
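
The textbook alternative to one hardcoded password is a per-device credential. Here is a minimal sketch of key diversification (my own assumption about how it could be done, not Schneider’s actual design):

```python
import hmac, hashlib

# Sketch of key diversification (an assumption, not Schneider's design):
# each charging station gets a unique credential derived from a master
# secret and its serial number, so leaking one station's password does
# not unlock every station in the field.
MASTER_SECRET = b"operator-master-secret-kept-offline"

def station_credential(serial: str) -> str:
    """Derive a per-station credential from the master secret."""
    return hmac.new(MASTER_SECRET, serial.encode(), hashlib.sha256).hexdigest()

def authorize(serial: str, presented: str) -> bool:
    """Constant-time comparison against the derived credential."""
    return hmac.compare_digest(station_credential(serial), presented)
```

With a scheme like this, extracting the password from one unit in the field gets an attacker exactly one unit, instead of every switch the vendor ever shipped.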

It’s kind of like those air pumps at gas stations that say you need to insert $0.25 when really you just have to push the button, or at least yell at the person in the station booth “hey I need some air here, enable the button so I can push it” and you get air for free.

Breaking news: I got some air for my tires without inserting a quarter. Someone call TechCrunch journalists.

Seriously though, it would be news if someone actually had to pay for a plugged-in tire to start filling.

If a gas station owner insists that you have to pay for air even after you’ve used the pump, stand your ground. If that doesn’t work, here’s the form to report the station to state officials.

That’s right, and speaking of denial of service…an attacker could even run off with a gasoline pump hose (they have safety release mechanisms) or an air hose. Such a brazen attack would leave cars that have tires and gas tanks without services when they pull into a station.

Fuel hose disconnects do happen fairly often, by the way. So often, in fact, that there are videos and plenty of news stories about why and where it happens (protip: bad weather is correlated).

And yet TechCrunch wants us to be scared of EV cables being disconnected:

…unlock the cable during the charging by manipulating the socket locking hatch, meaning attackers could walk away with the cable.

Safety first, amiright? Design a breakaway and attackers can walk away with the cable…for safety.

A “vulnerability story” like this EV one by TechCrunch makes me imagine a world where the ranking of stories has a CVSS score attached…a world where “news” like this could theoretically never rise above stories with a severity actually worth thinking about.

An attacker could disable or enable a charging point, yet charging status is something easily monitored on a near-continuous basis. Did your car just stop charging? It’s something you and the provider ought to know in the regular course of using a power plug.
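
A monitoring loop for that is trivial to sketch (the `read_status` callback here is a stand-in assumption, not any real charge-point API):

```python
import time

# Hypothetical sketch: poll a charging session and alert when charging
# stops without driver or operator action. `read_status` and the status
# strings are assumptions standing in for real charge-point telemetry.
def watch_session(read_status, alert, poll_seconds=5, max_polls=None):
    polls = 0
    while max_polls is None or polls < max_polls:
        status = read_status()
        if status == "stopped_unexpectedly":
            alert("charging stopped without driver or operator action")
            return True   # anomaly detected
        if status == "complete":
            return False  # session ended normally
        polls += 1
        time.sleep(poll_seconds)
    return False
```

Which is the point: flipping a public switch is the kind of event both the driver and the operator would notice within one polling interval.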

This ranks about as low as you can go in terms of security story value…yet a journalist dedicated a whole page to discuss how a public power-plug can be turned on and off without strong authentication.

Dust-sized battery-free AI sensor with RF-free wireless

The title of this post is the announcement I just received in a CES invite to assess product security. Well, technically it was a “VIP lounge” invite more than a “please break our product” invite, but I treat them the same if you know what I mean.

Perhaps most infamously, when I went to CES many years ago and met with 3Com to review their brand new wifi access points (first to market), I immediately pointed out that hard-wired WEP keys were a VBI (very bad idea). 3Com product managers were unapologetic, citing usability as their ace card. “Nobody will use wifi if we make key management hard,” they said, like a blackjack player scooping all the chips into their lap. We both turned out to be right, but they no longer exist (acquired in 2010 by HP and never heard from again).

I suppose today what stands out to me most about this new announcement is the “dust-sized” marketing.

Some may remember I have presented in my “big data security” talks specifically on the paranoia that should accompany any development of dust-sized tracking devices, as well as the ironic fact that walking through an obfuscating layer of dust (more probably sand) leaves obvious tracks.

Cretaceous period (127m year old) dust print

I’m looking forward to breaking this new product to point out the VBIs, and maybe even coming up with something like “sweep deprivation” models.

IBM Watson Sued by LA County for Secretly Tracking Users

Let’s get one thing out of the way. IBM’s Watson (the man, not the machine named after him) was instrumental to the Nazi Holocaust, as he and his direct assistants worked with Adolf Hitler to help ensure genocide ran on IBM equipment.

When IBM’s director of worldwide media relations, John Bukovinsky, was asked about the disclosures in 2001 and 2002 of the company’s involvement in facilitating the extermination of millions of Jews, Gypsies and others, he replied, “That was six years ago [sic].” When a reporter pointed out that the Holocaust itself was some 60 years ago, Bukovinsky quipped, “So what. What is the point?”

The idea that IBM would want to name their big data system after the man notorious for meeting with Nazi leaders to deliver counting machines for genocide…it’s a pretty big sign that the evils of Watson are something to keep an eye out for even in the present day.

As Edwin Black wrote in “IBM and the Holocaust: The Strategic Alliance between Nazi Germany and America’s Most Powerful Corporation“:

Thomas Watson was more than just a businessman selling boxes to the Third Reich. For his Promethean gift of punch card technology that enabled the Reich to achieve undreamed of efficiencies both in its rearmament program and its war against the Jews, for his refusal to join the chorus of strident anti-Nazi boycotters and isolators and instead open a commercial corridor the Reich could still navigate, for his willingness to bring the world’s commercial summit to Berlin, for his value as a Roosevelt crony, for his glitter and legend, Hitler would bestow upon Thomas Watson a medal — the highest it could confer on any non-German.

Fast-forward to today and IBM’s Watson has been charged with user location tracking using an innocent-sounding weather app.

In a complaint filed Thursday in California state court, the city alleges IBM used detailed location data from users for targeted advertising and to identify consumer trends that might be useful to hedge funds, while at the same time telling consumers their location would only be used to localize weather forecasts. The suit doesn’t allege personally identifiable information was sold.

“Unbeknownst to many users, the Weather Channel App has tracked users’ detailed geolocation data for years,” the complaint alleges, calling the Weather Channel’s actions “unfair and fraudulent.” The complaint also says the Weather Channel profited from the data, “using it and monetizing it for purposes entirely unrelated to weather or the Weather Channel App.”

Again, it’s hard to fathom that IBM would want to name a big data machine Watson. It’s even harder to fathom that someone in IBM thought lying about user location tracking to monetize ill-gotten data was a good move…but then I just go back to them naming their machine Watson.