The Legality of Offensive Hacking

This is Part I in a series of articles on hacking back, or aggressive cyber defense. The questions I would like to explore, and invite comments on, are these: Is hacking back in self-defense legal or illegal? Ethical or unethical? Should it be pursued within clearly defined parameters or in Wild West fashion?

If you have read my article, “Hacking Back In Self-Defense: Is It Legal; Should It Be?,” you are aware that I believe hacking back in self-defense, in certain circumstances, is legal.

When I lecture on this topic, though, there is at least one person in the crowd who is adamantly opposed and claims it is illegal. Usually, when that argument is analyzed, it turns out to be an ethical objection rather than a legal one.

So, in addition to the questions posed above, I ask one more: if your system has been compromised and is being used to attack my network or computers, do I have the right to hack back, or aggressively defend my network against your attacking system, even if my defense may disrupt your computer or network?

Please provide comments below and I will continue in a few days.

Networking Food

One of the primary reasons Rudolf Diesel invented his engine in 1893 was to help ensure farmers were not dependent on an external/industrial source of energy, but rather could generate it on their own.

Unfortunately, the agriculture industry has gone in the opposite direction from his vision (and from the American populist platform of the People’s Party) and become entirely dependent on petroleum.

A new film made by postgraduate students at London’s School of Oriental and African Studies (SOAS), where I did undergraduate work, looks at current food issues facing the UK.

Will a localized, resilient and redundant peer-to-peer energy and food model be able to displace the highly centralized, fragile and foreign-based client-server system advocated by petroleum companies?

Something tells me that the following statement on risk has more impact on policy than all the comments combined from consumers feeling the pinch of rising petroleum costs.

“The Navy has always led the nation in transforming the way we use energy, not because it is popular, but because it makes us better war fighters,” stated [U.S. Navy Secretary Ray] Mabus.

SCADA exploits released: Siemens SIMATIC

Every time I hear people tell me it would take a nation-state budget and an army of trained cyber warriors to design exploits and infiltrate systems, I wonder where they get their data. Billy Rios has been kind enough to argue against this not only in theory but by demonstrating just how easy it was for him to find vulnerabilities in the Siemens SIMATIC system. Now he has released exploit details.

Nothing sophisticated here:

If a user changes their password to a new password that includes a special character, the password may automatically be reset to “100”. Yes, you read that correctly… if a user has any special characters in their password, it may be reset to “100”. You can read about these awesome design decisions (and many others) in the Siemens user manuals.
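As a rough illustration of the behavior Rios describes, a password-change routine with this flaw might look something like the following Python sketch. The function name and logic here are hypothetical, for illustration only; the actual Siemens implementation is not public.

```python
import string

# Characters treated as "special" in this hypothetical sketch.
SPECIAL = set(string.punctuation)

def change_password(new_password: str) -> str:
    """Return the password the (flawed) system actually stores."""
    if any(ch in SPECIAL for ch in new_password):
        # Instead of rejecting the input, the system silently
        # replaces the password with a known default value.
        return "100"
    return new_password

# A strong-looking password ends up as the trivially guessable "100".
print(change_password("C0rrect!Horse"))  # -> 100
print(change_password("plainpassword"))  # -> plainpassword
```

The point of the sketch is the failure mode: the user believes a complex password was set, while the system has quietly stored a default any attacker can guess.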

And again:

For those non-techies reading this… what can someone do with this non-existent bug? They can use this to gain remote access to a SIMATIC HMI which runs various control systems and critical infrastructure around the world… aka they can take over a control system without knowing the username or password. No need to worry though, as there are “no open issues regarding authentication bypass bugs at Siemens.”

In his presentations he has pointed out that evaluation of the exploits is easy from the comfort of one’s own bedroom. In his latest post he also points to some (perhaps illegal) remote test options.

I’ve found MANY of these services listening on the Internet… in fact you can find a couple here: http://www.shodanhq.com/search?q=simatic+HMI
https://www.google.com/?#q=%22SIMATIC+HMI+Miniweb+on%22
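Shodan also exposes a REST search API, so the same banner query can be issued programmatically. The sketch below only builds the request URL (the query string is the one from the post; the API key and the decision to actually send the request are left to the reader, since querying such systems may carry legal risk, as noted above).

```python
from urllib.parse import urlencode

# Shodan's host-search REST endpoint.
SHODAN_API = "https://api.shodan.io/shodan/host/search"

def build_search_url(query: str, api_key: str) -> str:
    """Construct a Shodan host-search URL for a banner query."""
    return SHODAN_API + "?" + urlencode({"key": api_key, "query": query})

# Same query as the Shodan web search linked above.
url = build_search_url("simatic HMI", "YOUR_API_KEY")
print(url)
```

Fetching that URL with a valid key returns JSON listing matching hosts, which is exactly why exposed control-system interfaces are so easy to enumerate.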

A major tenet of my argument at the Dr. Stuxlove presentation was that we can do ourselves a serious disservice in risk management by overestimating the sophistication and talent of our adversaries. If the level of knowledge required to exploit a system is low then vendors will be under far more pressure to patch and fix.

Another interesting way of looking at this is to consider the natural schism of resources in the security industry: there is tension between remediation and investigation. Those monitoring for attacks may emphasize the presence of highly sophisticated adversaries because that is directly linked to their funding. If you put them into a complete risk equation and point out that vulnerabilities are easily fixed, they will tell you that you just don’t understand how smart the people are that you are up against. Don’t be tempted to give them more money right away. That is the point at which you should ask them to “define sophisticated”, which really means: explain the details of the vulnerabilities and the cost of remediation.

True security is to live a vulnerable lifestyle. When someone says driving a car safely is so sophisticated that you should spend millions on detection and investigation, you might respond that wearing a seatbelt and installing airbags, brakes and suspension will work just fine for your risk management program. That is to say, there is a balance of investment, and overestimating the sophistication of threats may yield less risk reduction than spending on innovation that reduces vulnerabilities.

Of course, manufacturers first have to acknowledge that their emperor is naked: vulnerabilities are real.

For all the other vendors out there, please use this as a lesson on how NOT to treat security researchers who have been freely providing you security advice and have been quietly sitting for half a year on remote authentication bypasses for your products.

Since Siemens has “no open issues regarding authentication bypass bugs”, I guess it’s OK to talk about the issues we reported in May. Either that or Siemens just blatantly lied to the press about the existence of security issues that could be used to damage critical infrastructure…. but Siemens wouldn’t lie… so I guess there is no authentication bypass.

Siemens has faced embarrassing public exposure of security issues in the past, including disclosure of easy exploits, and has released advisories, so it will be interesting to watch how this episode plays out.

Billy Rios is thus doing a great service by pointing our attention to something Americans should already be very familiar with. The Siemens SIMATIC is unsafe at any speed: there are designed-in dangers in critical infrastructure systems.

Why Drones Crash

Time again to put up another post on an unmanned aerial vehicle (UAV), or unmanned aircraft system (UAS), or remotely piloted aircraft (RPA), or unmanned aircraft… OK, so let’s just call it a drone for now.

I pointed out a few days ago that the U.S. Gov’t in 2008 was formally warned of the extremely high rate of drone accidents.

…2008 report by the Congressional Research Service, the nonpartisan analytical arm of Congress, found UAVs have an accident rate 100 percent higher than manned aircraft.

Things apparently haven’t improved much since then: three years later, we are all being subjected to headlines about yet another drone that got away. This should be little surprise, given the trend.

An analysis of official Air Force data conducted by TomDispatch indicates that its drones crashed in spectacular fashion no less than 13 times in 2011, including that May 5th crash in Kandahar.

About half of those mishaps, all resulting in the loss of an aircraft or property damage of $2 million or more, occurred in Afghanistan or in the tiny African nation of Djibouti, which serves as a base for drones involved in the U.S. secret wars in Somalia and Yemen. All but two of the incidents involved the MQ-1 model, and four of them took place in May.

In 2010, there were seven major drone mishaps, all but one involving Predators; in 2009, there were 11. In other words, there have been 31 drone losses in three years, none apparently shot down, all diving into the planet of their own mechanical accord or thanks to human error.

Maybe the military will always report drone crashes as errors. That’s possible, I guess, especially since they often operate clandestinely and in very remote places. Either way, the drones have a high rate of failure that has been publicly linked to some basic risk factors, and those factors do not appear to be improving.

I also pointed out earlier that the NTSB investigation of a U.S. drone accident made some clear recommendations for how to reduce failures: follow checklists, require supervision for inexperienced pilots… you know, the kind of stuff that reportedly isn’t being done when these drones have accidents.

The final leg of the doomed mission — in support of elite special operations forces — was being carried out by a pilot who had been operating Predators for about 10 months and had flown drones for approximately 51 hours over the previous 90 days. With less than 400 total hours under his belt, he was considered ‘inexperienced’ by Air Force standards and, during his drone launch and recovery training, had failed two simulator sessions and one flying exercise. He had, however, excelled academically, passed his evaluations, and was considered a qualified MQ-1 pilot, cleared to fly without supervision.

[…]

During the post-crash investigation, it was determined that the ground crew in Afghanistan had been regularly using an unauthorized method of draining engine coolant, though it was unclear whether this contributed to the crash.

[…]

Eventually, the Air Force ruled that a cooling system malfunction had led to engine failure. An accident investigator also concluded that the pilot had not executed proper procedures after the engine failure, causing the craft to crash just short of the runway, slightly damaging the perimeter fence at Kandahar Air Field and destroying the drone.

Some may have good reason to debate how all the best American anti-jamming technology in the world could not prevent a drone from falling into Iranian hands, but that does little to address the mounting data on accidents.