
Update: Putting an End to the End of Active Defense

I recently read an article, “Putting an end to ‘strike back’ / ‘active defense’ debate…”, and another it linked to, “Managing The Legal Risks Of Active Defense,” wherein my friend Bob Clark was quoted.  Here is my response: 

Why in the world would we end the debate?  Security sucks and the bad guys have a huge advantage.  Our hands are tied.  Any debate that moves the discussion forward is a good thing. 

In the first article a guy calling himself Jericho chastises those who advocate Active Defense. He equates it to strike back and hack back. I have to say, I agree with two of his points: many companies are now trying to capitalize on this new term (yes, new term) by offering what they call active defense or hack back tools. In many cases this advertising is deceptive since the tools merely offer the same old software defenses under a new name. I also agree that if your defenses don't meet the basic standard, Active Defense is not an option.

What I disagree with is his characterization of Active Defense. I wish people would stop equating it to hack back. Hack back is the last 1% of Active Defense. See my definition here:

It is a method for companies who find themselves persistently attacked to collect the intelligence needed to evaluate the attacks, develop courses of action or options, and then enable the leadership to make well-informed decisions to move forward in an effort to protect the company.

On a spectrum the options could be anywhere from do nothing or the other extreme of hack back to either find the attackers or disrupt or deny the server(s) being used to launch the attacks. The intelligence collected will allow company leadership to make decisions at pre-determined checkpoints based on risk, liability and legal issues.

The initial decision whether to simply proceed with incident response versus Active Defense is based on determining whether the attack is a one-time incident or persistent, and how much money is being lost. Active Defense will require the company to bring in a team of experts to accomplish the various tasks: intel collection, malware analysis, tool/technique development, and evaluating legal, risk and liability issues. The cost involved must therefore be weighed against the damage to the company or loss due to the attacks.

Also, I disagree with the many people who write in opposition to Active Defense and make broad statements about how it is illegal without defining Active Defense or detailing what they believe to be illegal or why.  If you’re not an attorney stop saying it is illegal because the legality of Active Defense is not black and white. 

Jericho’s assertions strike me as hypocritical: he jumps on the bandwagon of the Active Defense flurry, makes broad assertions and offers NO solutions.  If defense is so easy then provide the solution, a solution that hasn’t been tried, one that will work and will not be subverted by hackers within a few months.  Second, see my friend Davi’s response, here: "Putting an End to the End of Active Defense".  Good luck.

As for the article in which my friend Bob is quoted, I agree with Bob, for the most part.  You need a team of experts who know what they are doing, including one or more attorneys who know what they are doing, and more than just an attorney you believe you can explain the technology to.

This is not the kind of stuff you can just brush up on over the weekend.  This takes years of experience to understand the technology, apply the law and foresee the results or consequences.  Don’t believe it?  Ask your lawyer if he/she would be willing to put their law license on the line and provide advice in cyber security, hack back, the CFAA, ECPA, trace back, open-source collection, etc. 

What I disagree with is his comment that this is a no-win situation.  If you are a company owner and losing a lot of money or intellectual property, have tried everything else, and the attacks continue, you have a fiduciary responsibility to do something and self-defense may be your only option. 

Now, this does not mean jumping right to hack back.  My definition for Active Defense and what it entails is at the link above.  What it does mean is following a process, similar to incident response on steroids, and as the company leadership making critical decisions to protect the company.  In the end it may mean taking actions in self-defense and blocking or disrupting a CnC server or deleting your IP on a compromised server.  These options, though, are merely that: options in a process that requires a lot of intel, thought and decision-making.

So, keep the debate going and don’t dismiss Active Defense as a no-win situation or illegal activity.

Posted in Security.


Putting an End to the End of Active Defense

Today jerichoattrition wrote a provocative blog post called "Putting an end to ‘strike back’ / ‘active defense’ debate…" The magic phrase offered is this:

Ending the Debate In One Easy Line

If a company can’t do defense correctly, why do you think they can do offense right?

That simple, that logical.

Security experts are fond of saying security is a process not a destination. Continuous improvement is the aim, like balancing a bicycle, rather than aiming for a specific event and calling it done.

It is similar to keeping healthy or fit. As soon as you achieve a goal you set another and continue with your measurements and training.

But what if we could find a secret formula to settle our debates about security once and for all? What if we could utter one magical phrase to make everyone see things the way we see them — our vision of security as the final destination. Would anyone want that?

Sounds like a Twilight Zone episode to me. Someone wishes everyone would stop debating and just agree. Then, as soon as this dream comes true, the protagonist realizes a giant mistake has been made.

The camera pulls back and we see a man running frantically through the street, begging someone, anyone to debate or disagree. Instead, surrounded by smiling faces all he hears is "I agree!"

I agree. I agree...
I agree! I agree!

Do we really want that? What is simple or logical about saying good offense depends on good defense? This debate is far from over and that's a good thing…

Jericho's post does not explain away the fact that the two can be, and often are, mutually exclusive. The very foundation of a deterrence policy, for example, is an offense so effective that defensive capability becomes less relevant.

I'm tempted to point out the many sports teams with good offense and bad defense.

Instead, sticking with IT, a large enterprise that struggles to upgrade defenses still can have an effective offensive team. An offensive team in fact may be built faster/better/stronger to focus back on the enterprise itself to help pinpoint and improve slower/worse/weaker defenses.

Defense often is saddled with dependencies, depreciation issues, complexity, politics, etc. Meanwhile an offensive team can quickly come directly into modern and advanced capabilities. In other words, building a highly effective offensive team is sometimes a strategic investment that can push an ineffective defensive team ahead.

A mismatch, with a better offensive team, means flaws can be found with visibility into risk posture, blasting through obstacles that held back better defense investments. This imbalance should be no stretch of the imagination. It's common and has been happening for many years. Think of it as a football team that pits its lagging defense against its own top-ranked offensive line to pinpoint holes and improve defensive capabilities. Companies are hiring top red-team talent even when their blue-teams aren't top tier.

Back to the point of active defense, a highly-effective offensive team that is better than a defensive team simply could switch focus towards targets outside. That is why it is easy to see how a company that can't do defense right can do offense right.

The blog post also tries to warn us of a lack of solid definition for "active defense."

…note that recon is not ‘defense’. By port scanning, pinging, or tracerouting the remote system that attacked you, it does not help you defend your network. It is the first stage of an active response. Strictly based on the terminology of “active defense”, activity such as changing a configuration or creating real-time decoys to increase the cost of attack. Even today’s news, covering an entire talk on the legal risks of “active defense”, does not even define the term.

Recon is a part of defense, "it is the first stage", but it is not alone a defense. Agreed. But why are we worried that the definition isn't easy? That seems normal to me. Or why worry that a definition isn't found in one talk?
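To make the recon distinction concrete, the kind of activity the quote describes, such as port scanning the remote system, amounts to a few lines of code. This is a minimal sketch of a TCP connect probe (the host and ports are illustrative; probing systems you are not authorized to test may itself carry legal risk, which is exactly why the lawyers come in):

```python
import socket

def port_open(host, port, timeout=1.0):
    """TCP connect probe of a single port: True if it accepts a connection."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe a handful of common service ports on a host you are authorized to test.
for p in (22, 80, 443):
    print(p, "open" if port_open("127.0.0.1", p, timeout=0.5) else "closed")
```

As the quote says, nothing here defends your network by itself; it only feeds the intelligence stage of an active response.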

After reading the post I see more room for debate, more uncertainty and fear without solid explanation or supporting argument. Here are just four examples from where debate can easily continue:

If you can easily and positively attribute, they shouldn’t have breached your defenses. You have no business attacking them when you were negligent on defense 101.

Containment is more complicated than this view. Attribution may come later, as part of a decision process for limiting damage. Whether easy and positive attribution comes within 1 minute or 1 day, it would be post-breach. Not every breach can be anticipated, which is why a common phrase responders use is "always prepared, never ready".

If you think you can positively attribute, you cannot, you are out of your element.

Again, overly simplistic view. Attribution is hard for some, easier for others. Hiding is effective for some, impossible for others. Most important is that practice makes attribution more accurate and there are many public cases of positive/successful attribution.

Even if you can miraculously attribute the human at the keyboard, regardless of how many hops back, you cannot positively attribute who hired them to hack you.

This is a decision-point rather than a dis-incentive. Responders can positively attribute deeper than just front-line attacks. Anti-mob and anti-terror efforts reach the source all the time. We can be just as effective.

If you attribute the person, and not the motive, by hacking back, you violated the law just as they did.

I have to point out here that legal advice from a non-lawyer is specious. Meet with a lawyer if you want to know when and how you will violate the law. As David Willson has written on this blog and presented many times, active defense is not a crime.

Posted in Security.

Red Means Go, Green Means Slow

While riding in late night taxis in Brazil I noticed they hit the accelerator through red lights. When we approached a green light, they would slow down and look around for people running the reds.

I had to ask why. The drivers said this is a risk mitigation strategy.

Because of the danger of assault, Brazilians drive through red traffic lights at night, treating them as just a warning.

Since stopping at a red light, especially late at night, makes you an easy victim for car-jacking or robbery…we didn't stop.

And because everyone there knows drivers run red lights to stay safe, drivers with green lights slow down before crossing an intersection.

Just another example of why we should seriously reconsider stop-lights and their overall impact to risk (inefficiency of idling, yellow-light behavior, etc.)

Posted in Energy, Security.

The anti-virus age AIN'T over

Graham Sutherland wrote a provocative blog post titled "The anti-virus age is over." I hear this a lot and I often argue against it, as I did recently in a Twitter thread with @jeremiahg and @adamjodonnell.

I noticed Graham argues against his own title. His blog concludes:

Now don’t get me wrong, AV still has its place in the security world

Is an age over if there is still a place in the security world? I say no.

Cory Doctorow apparently does not come to the same conclusion, and instead used Sutherland's opening argument in his Boing Boing post called "When advanced black-hat hacking goes automatic, script kiddies turn into ninjas" to promote a fictional story of his own.

[The anti-virus age is over] was the premise and theme of my novella Knights of the Rainbow Table (also available as a free audiobook).

I confess I haven't read much by Doctorow since he ranted against American Airlines data collection practices. At that time I wrote the following response to his predicament:

I have always observed that wise travelers provide no more than the information that is directly relevant to the question being asked — the "most accurate" answer — which has neither too little nor too much detail. It's a fine balance, but part of the usual business of crossing International boundaries, obviously compounded by different cultural views of what constitutes suspicious or risky behavior.

Although I hate to question Doctorow's risk management vision again, it seems to me the anti-virus age will be over when we no longer see any place for anti-virus.

The age isn't over because the need to defend against polymorphic threats does not mean we should completely remove blacklists for non-polymorphic threats. Sutherland concedes this in the final text of his blog.
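A signature blacklist for non-polymorphic threats is trivially cheap, which is part of why it still has a place. Here is a minimal sketch using exact hash matching (the payload and digest set are illustrative, not real malware signatures):

```python
import hashlib

# Toy blacklist: SHA-256 digests of known-bad files (illustrative values only).
BLACKLIST = {
    hashlib.sha256(b"EICAR-like test payload").hexdigest(),
}

def is_known_bad(file_bytes):
    """Exact-match lookup: cheap, but defeated by any byte change (polymorphism)."""
    return hashlib.sha256(file_bytes).hexdigest() in BLACKLIST

print(is_known_bad(b"EICAR-like test payload"))   # True: exact match
print(is_known_bad(b"EICAR-like test payload!"))  # False: one byte changed
```

The second call shows exactly why this approach fails against polymorphic code, and also why it remains a near-free check for everything else.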

To put it another way, should we stop using seat-belts because we can get sick from bird-flu? Obviously not.

I tried to make this risk distinction in my 2012 RSA Conference presentation "Message in a Bottle: Finding Hope in a Sea of Security Breach Data." Here is how I laid out the age of seatbelts (sorry about the RSA template colors):

2012 RSA SF Conference Slide - Seatbelts

This view of history suggests to me that anti-virus software will become more integrated into the cost of our systems (like seat-belts became de-facto for cars and eventually a law). It will become less visible as it becomes integral.

So where are we headed? Analytic ability with data collection is what comes next, like air-bags were added to seatbelts. But the seatbelt analogy doesn't really work with intelligent, adaptive threats, as I also illustrated in my 2012 RSA Conference presentation (based on "Dr. John Snow's map-based spatial analysis and algorithm" for germ theory).

2012 RSA SF Conference Slide - Ghostmap

To follow Snow's footsteps our discretionary spend will shift towards data collection, anomaly detection and advanced response capabilities (e.g. big data security analysis). We will get better at finding and responding with new tools, while still using computer anti-virus and other old tools.
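The anomaly-detection piece of that shift can be sketched in a few lines. This is a toy z-score example over daily event counts (the data, field meaning and threshold are all illustrative, not a production detector):

```python
import statistics

def flag_anomalies(counts, threshold=2.0):
    """Return values whose z-score against the sample exceeds the threshold."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    return [x for x in counts if abs(x - mean) / stdev > threshold]

# Daily login-failure counts; the spike on the last day stands out.
daily_failures = [10, 12, 11, 13, 12, 90]
print(flag_anomalies(daily_failures))  # [90]
```

Real big-data security analysis layers far more context on top, but the idea is the same as Snow's map: find the points that deviate from the baseline, then go investigate them.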

Posted in Security.

#HeavyD and the Evil Hostess Principle

At this year's ISACA-SF conference I will present how to stop malicious attacks against data mining and machine learning.

First, the title of the talk uses the tag #HeavyD. Let me explain why I think this is more than just a reference to the hip-hop artist or nuclear physics.

The Late Great Heavy D

Credit for the term goes to @RSnake and @joshcorman. It came up as we were standing on a boat and bantering about the need for better terms than "Big Data". At first it was a joke and then I realized we had come upon a more fun way to describe the weight of big data security.

What is weight?

Way back in 2006 Gill gave me a very tiny and light racing life-jacket. I noted it was not USCG Type III certified (65+ newtons). It seemed odd to get race equipment that wasn't certified, since USCG certification is required to race in US Sailing events. Then I found out the Europeans believe survival of sailors requires about 5 fewer newtons than the US authorities.

Gill Buoyancy Aid
Awesome Race Equipment, but Not USCG Approved

That's a tangent but perhaps it helps frame a new discussion. We think often about controls to protect data sets of a certain size, which implies a measure at rest. Collecting every DB we can and putting it in a central Hadoop cluster: that's large.

If we think about protecting large amounts of data relative to movement then newton units come to mind. Think of measuring "large" in terms of a control or countermeasure: the force required to accelerate one kilogram of mass at one meter per second, per second (1 N = 1 kg·m/s²).


Hold onto that thought for a minute.
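For the record, the unit works out like this. A trivial sketch (the life-jacket arithmetic assumes standard gravity, g = 9.81 m/s²):

```python
def force_newtons(mass_kg, accel_m_s2):
    """F = m * a: one newton accelerates one kilogram at one metre per second squared."""
    return mass_kg * accel_m_s2

print(force_newtons(1, 1))  # 1 newton, by definition
# The ~65 N buoyancy of a USCG Type III jacket offsets roughly 6.6 kg at g = 9.81 m/s^2:
print(round(65 / 9.81, 1))
```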

Second, I will present on areas of security research related to improving data quality. I hinted at this on Jul 15 when I tweeted about a quote I saw in darkreading.

argh! no, no, no. GIGO… security researcher claims "the more data that you throw at [data security], the better".

After a brief discussion with that researcher, @alexcpsec, he suggested instead of calling it a "Twinkies flaw" (my first reaction) we could call it the Hostess Principle. Great idea! I updated it to the Evil Hostess Principle — the more bad ingredients you throw at your stomach, the worse. You are prone to "bad failure" if you don't watch what you eat.

I said "bad failure" because failure is not always bad. It is vital to understand the difference between a plain "more" approach versus a "healthy" approach to ingestion. Most "secrets of success" stories mention that reaction speed to failure is what differentiates winners from losers. That means our failures can actually have very positive results.

Professional athletes, for example are said to be the quickest at recovery. They learn and react far faster to failure than average. This Honda video interviews people about failure and they say things like: "I like to see the improvement and with racing it is very obvious…you can fail 100 times if you can succeed 1"

So (a) it is important to know the acceptable measure of failure. How much bad data are we able to ingest before we aren't learning anymore — when do we stop floating? Why is 100:1 the right number?

And (b) an important consideration is how we define "improvement" versus just change. Adding ever more bad data (more weight), as we try to go faster and be lighter, could just be a recipe for disaster.

Given these two, #HeavyD is a presentation meant to explain and explore the many ways attackers are able to defeat highly-scalable systems that were designed to improve. It is a technical look at how we might setup positive failure paths (fail-safe countermeasures) if we intend to dig meaning out of data with untrusted origin.

Who do you trust?

Fast analysis of data could be hampered by slow processes to prepare the data. Using bad data could render analysis useless. Projects I've seen lately have added weeks to get source material ready for ingestion: decreasing duplication, increasing completeness and working towards some ground rule of accurate and present value. Already I'm seeing entire practices and consulting built around data normalization and cleaning.

Not only is this a losing proposition (e.g. we learned this already with SIEM), the very definition of big data makes this type of cleaning effort a curious goal. Access to unbounded volumes with unknown variety at increasing velocity…do you want to budget to "clean" it? Big data and the promise of ingesting raw source material seems antithetical to someone charging for complicated ground-rule routines and large cleaning projects.
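The cleaning steps those projects spend weeks on reduce to something like the following. This is a minimal sketch; the record shape, field names and completeness metric are illustrative assumptions, not any particular product's pipeline:

```python
def clean(records, key="id", required=("id", "src_ip", "timestamp")):
    """Deduplicate on a key field and score completeness of required fields."""
    seen, deduped = set(), []
    for r in records:
        if r.get(key) is not None and r[key] not in seen:
            seen.add(r[key])
            deduped.append(r)
    filled = sum(1 for r in deduped for f in required if r.get(f) is not None)
    completeness = filled / (len(deduped) * len(required)) if deduped else 0.0
    return deduped, completeness

events = [
    {"id": 1, "src_ip": "10.0.0.5", "timestamp": "2013-08-01T02:14"},
    {"id": 1, "src_ip": "10.0.0.5", "timestamp": "2013-08-01T02:14"},  # duplicate
    {"id": 2, "src_ip": None, "timestamp": "2013-08-01T02:15"},        # incomplete
]
deduped, score = clean(events)
print(len(deduped), round(score, 2))  # 2 records, ~0.83 complete
```

Even this toy version shows the problem: every rule you add is a cost that grows with volume, variety and velocity, which is the curious part of paying to "clean" big data.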

So we are searching for a new approach. Better risk management perhaps should be based on finding a measure of data linked to improvement, like Newtons required for a life-jacket or healthy ingredients required from Hostess.

Look forward to seeing you there.

Posted in Energy, Food, Sailing, Security.

Diesel = Winning

The Economist in 2011 made a salient point about the future of gasoline vehicles:

For Toyota, taking BMW's diesel engines is a tacit admission that its hybrid strategy does not cut it in Europe.

That means a gasoline-hybrid strategy is failing. A diesel-hybrid strategy, however, would have worked.

Two years later, today, the Economist admits the race is over. Diesel has won. Gasoline is dying.

The Toyota Prius hybrid? A lowly twentieth on the league table of the most economical fuel-sippers, with 4.2 litres/100km, along with higher emissions of carbon dioxide. The 19 cars having better fuel economy than the Prius hybrid are all clean diesels.
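For American readers used to mpg, the 4.2 litres/100km figure quoted above converts as follows (a quick sketch using standard unit definitions):

```python
LITRES_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def l_per_100km_to_mpg_us(l_per_100km):
    """Convert European fuel-consumption figures to US miles per gallon."""
    return (100 * LITRES_PER_US_GALLON) / (KM_PER_MILE * l_per_100km)

# The Prius figure quoted above:
print(round(l_per_100km_to_mpg_us(4.2), 1))  # 56.0 mpg (US)
```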

It really makes you wonder why we don't have a hybrid-diesel option instead of a hybrid-gasoline in America. At least GM has finally taken the lead domestically by releasing a passenger-car diesel option. It's not the car they talked about in 2008, but it's a start.

The Economist, however, is still thinking about the past. Instead of pointing out modern diesels already are available in America, they tell us diesels are on their way and Mazda is "leading". This sounds like nonsense to me:

Later this year, Americans will get their first chance to experience what a really advanced diesel is like—and why Europeans opt for diesels over hybrids, plug-in electrics and even petrol-powered cars. [Mazda's] diesel has 30% better fuel economy and provides oodles more pulling power. Good as the petrol version is, motorists who choose it over the diesel will miss out on a lot.

I'll tell you why their "first chance" talk is nonsense. Look at the Economist analysis of what has changed.

"What marks this latest generation of diesel engines from even their 'common-rail' predecessors of the late 1990s…"

That's a 20-year old reference — too far back to talk about predecessors. Several major generations of engine have hit American shores in-between the "late 1990s" and today. Why not compare current engines with those Americans have been driving in the 2000s?

Most notable was a major shift in 2004 when California regulated small passenger diesels out of the state (politics prevented regulation of the larger engines until later). Another major point was in 2008 when they re-appeared. This was no secret to the Economist. They wrote about it in an article called "Diesel’s second coming."

America's first chance to experience new diesel engine technology was around 2005 (when new VW TDIs were introduced) and then again in 2008 (when VW's diesel won car of the year). I would call 2004 and 2008 models the predecessors, taking us into the late-2013 technology.

So much for America's "first chance".

Perhaps it is fair to say some people haven't really been excited by diesel engines since the 1990s (the Economist gave Fiat Research an award) but who cares if those people have been asleep at their wheel since then. The Economist is not excused from doing research on the past decade of innovation.

It has been clear over the past ten years that diesel has achieved a pole position for increasing mpg while reducing emissions and providing performance and power. Combined with an electric motor, we're talking over 100 mpg and a fun driving experience. Volvo sold-out every diesel-hybrid targeted for France before they even left the factory.

Still, some are confused. Editors at Forbes offer us this non-prediction:

…it’s not yet obvious whether electric vehicles, diesels, or even compressed natural gas vehicles (there are millions in countries such as Iran and Brazil) will ultimately take the checkered flag in the race for efficiency

Millions in Iran? Never thought I would hear Forbes suggest Iran as a model of efficiency. Experts on Iran seem to paint it in the opposite light, a study of waste and unsustainable inefficiency:

…poor resource management have contributed to rapidly growing energy consumption and high energy intensity for the past decades.

What Forbes is really saying is "let's study engines in countries with domestic natural resources". This is NOT a good equation. No wonder Forbes is confused.

To me there is no question. The race for efficiency is really a race for freedom from resource dependence, related to national security. Diesel = winning.

The answer is obviously diesel-hybrid, given American driver habits/needs. No other engine competes when it comes to sustainability. As I have written here since 2005, it is the safest, easiest to deploy, most-resilient and yet best performing of all. It is no coincidence the military prefers diesel.

Natural gas presents an interesting alternative to the pollution of coal plants but, as the Economist itself has written in the past, it fails miserably with a car engine. It gets about the same mpg as gasoline with far less performance. Same problem as ethanol. That means a significant cost added to manufacturing with marginal benefits.

Electric vehicles are great performers, perfect for urbanites, yet they lack range and cost far too much to enter the market at a broad base. They also have major additional costs to factor such as battery maintenance and replacement, not to mention the occasional unexplained explosion and fire (e.g. even Boeing claimed they didn't see it coming).

Both natural gas and electric also require an overhaul to American infrastructure to enable vehicles with special engines. That's a pipe-dream (pun intended) and about as likely as hydrogen. When you think about how long broadband has taken to be upgraded in America, and how inexpensive that infrastructure is compared to fuel lines…

I wouldn't bet against diesel-hybrid.

Posted in Energy, Security.

American Fear of a Non-Motorized Planet

The area around Polk Street in San Francisco has experienced a high level of bicycle accidents. It has been ranked in the top five most dangerous streets. I can support this both quantitatively and qualitatively. Through nearly three decades of commuting by bicycle in the Twin Cities, London, Los Angeles and San Francisco the only place I have been hit by cars (twice!) is San Francisco.

In fact, in 1993 for more than six months day-or-night, rain-or-shine I rode 20 miles every day in central London and never once had impact with vehicles. (Other risks were higher: I was stopped and detained by anti-terror police once and I eventually was forced to reduce my daily ride time after a GP diagnosed me with serious respiratory damage from diesel-sulfur pollution).

Naturally the San Francisco Bicycle Coalition is looking at the same data. They work on traffic flow changes with urban planners to encourage cycling and reduce harm spots. This means increasing non-motorized traffic, supporting a higher density of consumers and increasing sales for local businesses.

Studies have proven that an increase in safety for cyclists creates so much more non-motorized traffic that flow calculations have to be adjusted. Urban planners in London used to assume non-motorized traffic would equate to a quarter of a space used by motorized vehicles. It turns out to be much higher. This is an amazing development when you consider the potential density of bicycles and how much space is wasted by automobiles.

Separate Transport for London figures already show that cyclists now make 570,000 trips in London every day compared with 290,000 trips in 2001.

Blackfriars, Waterloo and London bridges are all now among the top 10 busiest cycle streets in London. On all of these, cyclists make up 42 per cent of traffic and 15 per cent of people – though they take up just 12 per cent of road space.

Almost 9,300 riders – 11 a minute – cross London Bridge a day.

Why aren't more people cycling, given the obvious advantages? Turns out that even as bicycling is soaring it has been held back by safety concerns linked directly to automobile-centric thinking.

The inquiry heard that more than 42 per cent of Britons own a bicycle, but only 2 per cent of journeys in the UK are made by bike.

Many people who would like to cycle do not feel safe enough, and the inquiry heard that all road projects and urban development must include high-quality cycle lanes as part of the planning process.

In 1904, 20% of traffic in London was by bicycle. The city is now planning to return to that level because of the increased value cycling brings across the board (healthier citizens, cleaner air, higher density, lower infrastructure cost, more resilience to disasters, etc.). Look at how safety factors into the mayoral plan, a plan to reduce threat and harm from automobiles:

  1. A Tube network for the bike. London will have a network of direct, joined-up cycle tracks, with many running in parallel with key Underground, rail and bus routes
  2. Safer streets for the bike. Spending on the junction review will be significantly increased and substantial improvements to the worst junctions will be prioritised. With government help, a range of radical measures will improve the safety of cyclists around large vehicles
  3. More people travelling by bike. We will 'normalise' cycling, making it something anyone feels comfortable doing
  4. Better places for everyone. The new bike routes are a step towards the Mayor's vision of a 'village in the city', with more trees, more space for pedestrians and less traffic

Let's look at what higher density of more consistent-speed traffic really means.

In terms of commercial return, ask any shop owner or real-estate agent if they would rather see 1 customer enter their sales funnel or 10X customers. Instead of a single person taking up a giant parking space, or a full lane, we see the potential for 10X traffic for less cost. We also know that street-level advertisements (signs and store-fronts) are more effective on pedestrians and cyclists. That's the kind of low-impact scalability model any modern urban space should be rushing towards. You'd expect retailers to be leading the charge.

Despite these facts, many American businesses seem to be up in arms instead. Lose a parking space? Never. Look at the data? Impossible.

Believe it or not, some Americans consider pedestrian harm acceptable collateral damage in their automobile-centric life. A resistance to progress, despite obvious gains in traffic and better living conditions, comes from those who argue every parking spot translates to a direct positive impact on the value of their property. The same group also seems to believe everyone can win a dead-end race to own the biggest vehicle on the road to stay safe, and that pollution is a necessary evil.

It turns out that more carefully planned parking spaces, and even reduced motorized traffic flow, increase the value of property investments. This is the converse of what car-parking extremists believe. Of course common sense proves this. Suburbs, with the highest percentage of parking, struggle to hold value while urban spaces attract more people than ever despite a lack of car parking.

Consider also that the rate of driving is declining as people realize an American model of excessive automobile ownership (e.g. being stranded without a car) is the opposite of true quality-of-life values. Transportation options that make spaces clean and quiet with lower barrier to entry are where people are wisely spending money now (see "Young People Aren't Buying Cars" and "Young Americans Lead Trend to Less Driving")

So I just noticed that Boston news is facing a similar debate as in SF.

"There seems to be a knee-jerk reaction to the eliminated parking wherever something like this is proposed." said Pete Stidman, executive director of the Boston Cyclists Union. "We think if it was fixed up and made safer, you’d see an even huger increase in cycling there."

Hayes Morrison, Somerville’s director of Transportation & Infrastructure, said the city has reviewed the street with two parking studies, finding that there is ample parking available, and would continue to be after the reconstruction and loss of spaces.

Think about why a Boston or San Francisco is far more desirable to live in than a Los Angeles. It is like asking why people prefer to live near green spaces, parks and bicycle lanes instead of dangerous and polluted petroleum gulches like Polk St.

Trust me, I commuted by bicycle in Los Angeles in 1995 after I left London. It was nearly impossible. Bike lanes literally dead-ended at freeways with no options. I spent hours trying to map out routes that someone could actually ride and not be stranded by planners who ignored non-motorized traffic.

Bottom line is that areas of a city that have safe bicycle traffic will be the areas of density and prosperity growth. Cleaner, quieter…fewer cars, less parking, yet more people means better living. By comparison, neighborhoods that emphasize car parking are higher-risk, less desirable, less able to sustain heavy traffic and will lose value. Wasted space makes commerce more expensive with less return.

Until Polk street allows reasonable pathways for non-motorized traffic we all should avoid spending money in that area. Take your business elsewhere (e.g. Mission, Haight, FiDi), places that are working to maximize quality of life, reduce injury, and let us breathe easier. It's time to support areas invested in sustainable value. Stop protecting dead spaces for empty cars.

Posted in Energy, History, Security.

New GM Diesel Sportscar Beats Camaro Z/28

You may have noticed I'm fond of comparing highly-efficient diesel engines to sports cars. Two years ago I was writing comments on security blogs:

I mean a four-door all-wheel-drive station wagon made by Volvo is expected to be available next year that delivers better horsepower than a Ferrari 308 and a Camaro Z28, yet will also provide 100 mpg. That should have been an American made vehicle.

And I was shamelessly plugging the same example into my security presentations (red cars at the bottom are the Ferrari and the Camaro).

In short, it seemed pretty cool to me that a modern Volvo diesel station wagon could get over 100mpg yet give better performance than a Camaro Z/28.

I see now that GM has actually delivered on this performance level themselves with their new Cruze Diesel. GM announced it with the headline "Cruze Clean Turbo Diesel Delivers Classic Muscle Car Torque."

Similarly, Jalopnik has run the headline "New Chevy Cruze Diesel And ‘72 Camaro Z/28 Are Basically The Same Car"

…better than a 350z, an Esprit Turbo (but not an Esprit V8) and a Ferrari F355. And it gets better fuel economy!

Cruze'n on a Tractor

That's what I'm talking about! No, wait. The Cruze beats the Z/28. What do they mean, "same"? A Z/28 would spend way more dollars and hours at a pump. In any race over distance the Cruze wins.

258 ft-lb torque, 46 mpg, 717 miles/tank
(horsepower is dead)

Jalopnik is being facetious. I'm not. If the Cruze were a diesel-electric hybrid, like the Volvo, it would beat the Z/28 on 0-60 also.

That shadow image comes from GM…dislike. The shadow should be a bald eagle flying, a running wolf, something that shows American freedom and performance. The shadow is meant to look like "classic muscle" but instead looks to me like a dirty, smelly tractor. And that would be exactly the wrong image to sell a diesel sportscar. Classic muscle? It doesn't even sound good.

Incidentally, if you get the gasoline version of the same car they'll tell you it can get almost 40 mpg. You have to search the fine print to find that the gasoline engine gets 110 ft-lb less torque. NO thanks on the gasoline engine.

Engine:   Diesel     Gasoline
Torque:   258 ft-lb  148 ft-lb
MPG:      46         36
Cost:     $25K       $18K


The Cruze site points out that it outperforms the VW, which (surprise) is priced the same. Makes sense they're going head-to-head with another diesel in the market and price-matching but here too, dislike.

Instead they should have a number of vehicles to compare against. Where's my selector so I can do head-to-head with Ford, Kia, Toyota, Subaru…?

And let's see an ad with a Cruze Diesel versus a Prius pulling five people plus bags off the line. THAT would be funny.

Or GM could poke a little fun at itself and show a race between a Cruze and a Z/28 that includes fuel stops.

Or they could FOCUS on hitting Ford hard (a pathetic 36 mpg max, no diesel option) and put up a fleet-vehicle calculation engine that shows how you can save $20 million.
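A fleet calculation along those lines is simple enough to sketch. All the fleet-size, mileage, and fuel-price figures below are assumptions for illustration; only the 46 mpg diesel and 36 mpg gasoline ratings come from the comparison above:

```python
# Hypothetical fleet fuel-cost comparison. Fleet size, mileage and fuel
# prices are assumed; the 46 mpg diesel and 36 mpg gasoline figures are
# the ratings quoted above.
def annual_fuel_cost(miles, mpg, price_per_gal):
    """Fuel spend for one vehicle driving `miles` per year."""
    return miles / mpg * price_per_gal

FLEET_SIZE = 10_000      # assumed vehicle count for a large city fleet
MILES_PER_YEAR = 25_000  # assumed annual mileage per vehicle
DIESEL_PRICE = 3.90      # assumed $/gallon
GAS_PRICE = 3.60         # assumed $/gallon

gas = annual_fuel_cost(MILES_PER_YEAR, 36, GAS_PRICE)
diesel = annual_fuel_cost(MILES_PER_YEAR, 46, DIESEL_PRICE)
per_vehicle = gas - diesel
fleet_savings = per_vehicle * FLEET_SIZE

print(f"Per vehicle: ${per_vehicle:,.0f}/year")
print(f"Fleet: ${fleet_savings:,.0f}/year")
```

Under these assumptions the fleet saves several million dollars a year on fuel alone, so over a typical multi-year fleet cycle the total approaches the $20 million figure, even with diesel priced higher per gallon.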

I mean let's talk about an easy buy decision. Do you want 46 mpg in a hotrod turbocharged clean diesel from GM versus a slow and thirsty Ford? BOOM, done. Do you want your city to save millions every year in staff time and cost, and reduce pollution? BOOM, done.

Going in a stock white sock up against a sexy dark grey VW with a long-standing following…mmm, not such a good idea.

Posted in Energy, Security.

BSidesLV 2013: Data Breach Panel

Come to BSides Las Vegas this year and see a discussion on breaches that promises to be heated and thorny but far from deserted.

A burglar steals an unencrypted powered-down laptop containing PII and is immediately hit and killed by a bus. Data breach? As more laws are passed there remain many difficult questions to answer. This panel will try. Come see opposed minds in the industry debate the ethics and economics of incident response and related regulations. We will debate things like: have the past 10 years of breach legislation helped or hurt our efforts in information security? When is a breach really a breach? Is it wrong to say any loss of control is a breach and must be reported? Do you agree there is no safe harbor for encryption? Is it unduly costly on society if our breach definition is too broad?

Time: 12:30pm
Date: Aug 1
Location: G

  • Steve Werby
  • Phil Hagen
  • George Hulme
  • Jack Daniel
  • Raymond Umerley
  • Davi Ottenheimer


    Until Jack admits he's wrong

Posted in Security.

Repeal the Internet

Robert Samuelson wrote in the Washington Post, "If I could, I would repeal the Internet."

He's kidding, right? This is some kind of funny snarky sarcastic opinion piece meant to ridicule FUDslingers, right? It is supposed to make us conscious of the dangers of isolationists, right? Doesn't seem like it.

He mentions several past threats that were "hyped" and it even seems like he believes Mandiant's marketing engine. Uh-oh.

…the Internet creates new avenues for conflict and mayhem. Until now, the motives for hacking — aside from political activists determined to make some point — have mostly involved larceny and business espionage. Among criminals, “the Internet is seen as the easiest, fastest way to make money,” says Richard Bejtlich, chief security officer for Mandiant, a cybersecurity firm. Recently, federal prosecutors alleged that a gang of cyberthieves had stolen $45 million by hacking into databases of prepaid debit cards and then draining cash from ATMs.

Anyone who has been reading this blog (hi mom!) knows I can be somewhat opposed to the messaging of Mandiant and Bejtlich. I believe they relentlessly magnify threats into bogeymen of unbelievable proportions while at the same time oversimplifying them. Even worse, they peddle secrecy and fight against transparency in our industry.

Samuelson's theory is possibly the fruit of their labor: an economist scared of the Internet, banging a drum about risk in a major newspaper; a frightened result of Mandiant marketing. He doesn't explain trends in financial theft online; he just repeats the old line that attackers get progressively more dangerous and so right now, this very instant, they are more dangerous than ever.

Look at what he says about "'infrastructure' systems (electricity grids and the like)", for example.

In the mid-1980s, most of these systems were self-contained. They relied on dedicated phone lines and private communications networks. They were hard to infiltrate.

That's quite an exaggeration and misrepresents the industry. Dedicated lines and private networks in many cases made containment a nightmare; they were easy to infiltrate. Do you have any idea how difficult it was to search for analog lines to ensure no back-doors existed? By the 1990s countless nights were spent wandering halls and fiddling with ToneLoc scripts because we were in a race with attackers to hit a dial tone that *shouldn't* be there. Containment failure wasn't a new concept in the 1990s; phreaking for access was at least 20 years old by then and certainly a problem in the mid-1980s.
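Stripped to its essence, that ToneLoc-style sweep is just mask expansion plus a dial-and-listen check for each number. Here is a minimal sketch of the idea; the mask format, phone numbers, and mock carrier check are all hypothetical stand-ins for a real modem:

```python
# Illustrative sketch only: a wardial sweep reduced to its core loop.
# Every number and the mock carrier list below are hypothetical; a real
# sweep would dial each number and listen for a carrier tone.
from itertools import product

def expand_mask(mask):
    """Expand a dial mask like '555-01XX' into concrete numbers."""
    slots = [ch if ch != "X" else "0123456789" for ch in mask]
    return ("".join(combo) for combo in product(*slots))

def wardial(mask, has_carrier):
    """Return every number in the mask that answers with a carrier.
    `has_carrier` stands in for the actual dial-and-listen step."""
    return [n for n in expand_mask(mask) if has_carrier(n)]

# Mock sweep: two rogue modems hiding in a 100-number block.
rogue = {"555-0142", "555-0187"}
found = wardial("555-01XX", lambda n: n in rogue)
print(found)  # the dial tones that *shouldn't* be there
```

The hard part was never the loop; it was the hours of sweeping, logging, and then physically tracking down each unexpected answer.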

Remember the 414 Gang in 1983?

Pranksters disrupt a hospital, and nobody is laughing

Here's a clue from 1983 that should really illustrate how "self-contained" systems were:

The flurry of recent, highly publicized incidents involving young systems hackers accessing government and commercial data bases has refocused attention on a variety of proposed and recently enacted computer crime laws, both state and federal.

Testimony of both victim and attacker before the US Congress emphasized just how easy it was to infiltrate.

[Jimmy McClary, from the Los Alamos lab's operational security and safeguards division] and Mr. Patrick [one of the Milwaukee teen-agers who broke into dozens of large computer systems] said that because someone using a home computer could enter another computer just by dialing the wrong number, the law should differentiate between those who enter computer systems without malicious intent and those who deliberately attempt to alter or damage a system.

The fact is businesses are always clamoring to share information and they often install all kinds of rogue technology. Containment is violated as soon as the ability exists, which predates the 1980s. If anyone thinks executives are neatly standing in rows and following orders of their computer managers then they haven't done an assessment of containment in their life.

In other words, take a quick look at real news from the mid-1980s. A similar situation of scaremongering and fear was bubbling up in America. It is dangerous to forget that we've seen these political machinations before. The movie WarGames was released in 1983. The intel/mil community (e.g. the 1980s equivalent of Bejtlich) was warning back then that they should be allowed to take control of the Internet away from civilians to protect us from harm.

As I presented to Bejtlich and others in 2011, electricity grids and the like have been proven easy to infiltrate for many, many years and this is not any reason to freak out. Bejtlich's response, a tweet during my presentation, was that I don't understand "sophistication" of attackers, and that I haven't seen what he has seen.

My problem with this logic is that Einstein told us "If you can't explain it simply, you don't understand it well enough". So if Bejtlich wants to argue that he isn't able to explain it simply and he doesn't want to share the data…well, that's good entertainment material for security horror films but it doesn't actually make it real. Does it?

During the mid-1990s it was obvious to auditors that infrastructure could be infiltrated. A big difference back then was that the energy industry thought they could dissuade anyone from trying. On one engagement alone, for a multi-state bulk energy distribution company, I looked at thousands and thousands of routers on the Internet, all managed with clear-text authentication and no integrity monitoring. This seemed like the logical progression from the earlier analog/modem risks and, as usual, our ability to fix it was hampered by economics. To put a finer point on it, the network admin running the systems was begging for help from external assessors. He couldn't convince management to budget for better security controls.

We did our best to raise infiltration issues. Upper management reminded us we were just a portion of a larger "financial" risk model and that strict laws for prosecution were sufficient disincentive. In other words we were working under a US government position that since financial backers ran the energy business, if financiers were willing to accept risk then the government would too. As I remember it, the financiers (e.g. banks) responded that they were confident systems were not connected to the Internet…. Yet there we were looking at evidence to the contrary. We ran into a dead-end because of politics and economics, not any real failure of technology.

This is a frequent issue in defense. You find gaps and then have to set about convincing people to make change in terms that are mired in human decision. I easily could end up on the same side as Mandiant in many ways. Of course I want fewer holes, tighter controls, etc. to improve the state of technical defense capabilities. However, I pull away from them when I see how they want to change opinions with a "sky fall" marketing push, especially when coupled with secrecy and lack of accountability. Crying wolf can have dire consequences for our industry.

Information technology isn't the only place this happens. Let me try to put things in terms of another historic event. President Eisenhower, born in Kansas, had an ambitious plan in the mid 1950s to connect the US with a system of high-speed roads called the Interstate. You might think his home state of Kansas would be his biggest supporter. It wasn't.

I grew up not far from a town in Kansas that was a few hills from where Eisenhower grew up. This town objected to the Interstate coming near. They had fears very similar to what I see in Robert Samuelson's post about the Internet infrastructure. Highways were not thought of as a breakthrough but rather a means for unwanted outsiders to reach them, to reduce their happy containment.

Avoiding access to the Interstate sounds insane today, right? The Interstate has become the economic engine of towns in rural and urban America. It is the link to the world that helps economies thrive by delivering people and supplies. An economist surely can see how this flow is critical to success. Dismissing information on the Internet, access to knowledge, as "shallow" is hard to take as a serious argument.

Of course we couldn't be as successful without access to knowledge. Innovation is a function of exposure. There are risks to exposure. Yet good can easily outweigh bad exposure when cost-effective controls are applied. Sometimes those controls are economic as well. This race we're in is not just between offense and defense, it is between health and disease, education and ignorance….

About 50 years after the Interstate was built (30 miles south of that little town) residents had to admit their mistake. They widened the artery and increased speeds; they knew the value of outsiders coming faster and more frequently was worth the risks.

Every business knows there is friction in supply-chains. Should we treat everything as threatening when one bad guy drives into town and robs a bank? Obviously not. Is there "shallow" value to Interstate traffic? Yes, mixed in with the high value. Can we handle threats? Yes, if we approach them rationally. Compare this with how isolationists fare.

I firmly believe connectivity is the future. We need more, not less, access to data to be successful in emerging markets such as clean energy and bioscience. Where we see risk we need more sophisticated solutions than just isolation or militarization.

The Internet's virtues are far, far from being overstated. We are only beginning to achieve the potential benefits of better information exchanges. To shut off our connections now, or put them in the hands of the intelligence or military communities (or their advocates), would be a huge setback for America. We need to keep our networks open and under civilian control to focus on growth, unless under extreme danger (e.g. war); and if we ever must give up control we must have a clear and quick deadline for its return.

Posted in Energy, History, Security.