

6 Things Washington Doesn’t Get About Hackers

That’s the title of the original article someone tweeted today from Foreign Policy. I don’t really know who the author is (Micah Zenko – @MicahZenko) and it doesn’t really matter. The article, pardon my French, is complete bullshit. I’ve been a hacker for more than two decades and it pains me to see this supposed guide for policy.

Executive summary

After years of someone socializing at conference parties…ahem, researching hackers, six very unusual and particular things have been painstakingly revealed. You simply won’t believe the results. This is what you need to “get” about hackers:

  1. Wanna feel valued
  2. Wanna feel unique
  3. Wanna be included
  4. Wanna feel stable
  5. Wanna be included
  6. Wanna feel unique

Again, these shocking new findings are from deep and thorough ethnographic research that has bared the true soul of the hidden and elusive hacker. No other group has exhibited these characteristics so it is a real coup to finally have a conference party-goer…ahem, sorry, researcher who has captured and understood the essence of hacking.

Without further ado, as a hacker, here are my replies to the above six findings, based on their original phrasing:

1. Your life is improved and safer because of hackers.

Agree. Hacking is, like tinkering or hobbying, a way of improving the world through experimentation out of curiosity. Think of it just like a person with a tool, wrenching or hammering, because of course your life can improve and be safer when people are hacking.

Some of the best improvements in history come from hacking (US farmers turning barbed-wire fences into ad hoc phone lines in the 1920s, US ranchers building wireless relays to extend phone lines to mobile devices in the 1960s).

Arguably we owe the very Internet itself to hackers helping innovate and improve on industry. Muscle cars at informal race days are another good example…there are many.

Unfortunately Zenko goes way off track (pun intended of course) from this truism and comes up with some crazy broken analysis:

…products were made safer and more reliable only because of the vulnerabilities uncovered by external hackers working pro bono or commissioned by companies, and not by in-house software developers or information technology staff

This is absolutely and provably false. Products are made safer by in-house hackers as well as external ones; you don’t want to rely too heavily on either alone. To claim one side is always the good side is to show complete ignorance of the industry. I have worked on both sides many times. It should never be assumed one is the “only” one making the world improved and safer.

2. Almost every hack that you read about in your newspaper lacks important context and background.

This phrase makes little sense to me. As a historian I could easily argue everything lacks background that I like to see, yet many journalists do in fact use important context and background. I think his opening phrase was meant to read more like his concluding sentence:

The point being that each publicly reported hack is unique onto itself and has an unreported background story that is critical to fully comprehending the depth and extent of the uncovered vulnerabilities.

Disagree. Not every publicly reported hack is unique unto itself. This is a dangerously misleading analytic approach. In fact, if you look carefully at the celebrated DEFCON 23 car hacking stories, both obscured prior art.

Perhaps it is easy to see why. Ignoring history helps a hacker to emphasize uniqueness of their work for personal gain, in the same way our patent process is broken. Lawyers I’ve met blithely tell patent applicants “whatever you do don’t admit you researched prior work because then you can’t claim invention”.

Yet the opposite is really true. Knowledge is a slow and incremental process and the best hackers are all borrowing and building on prior work. Turning points happen, of course, and should be celebrated as a new iteration on an old theme. My favorite example lately is how Gutenberg observed bakery women making cookies with a rolling press and wondered if he could do the same for printing letters.

I would say 2009 was a seminal year in car hacking. It was that year hackers not only were able to remotely take over a car they also WERE ABLE TO SILENTLY INFECT THE DEALER UPDATE PROCESS SO EVERY CAR BROUGHT IN FOR SERVICE WOULD BE REPEATEDLY COMPROMISED. Sorry I had to yell there for a second but I want to make it clear that non-publicly reported vulnerabilities are a vital part of the context and background and it’s all really a continuous improvement process.

It was some very cool research. And all kept private until it started to become public in 2012, and that seems to be when this year’s DEFCON presenters say they started to try and remotely hack cars…even my grandmother said she saw this 2012 PBS NOVA show:

(image: 2012 NOVA car hacking episode)

No joke. My grandmother asked me in 2012 if I knew Yoshi the car hacker. I thought it pretty cool that such a wide audience was aware of serious car hacks, although obviously I had that scope wrong.

(image: grammahacker)

And going back to 2005, I wrote on this blog about dealers who knowingly sold cars with bad software that had severe consequences (unpredictable stalls).

To fully comprehend the depth and extent of “uncovered vulnerabilities” don’t just ask people at a publicity contest about uniqueness. It’s like asking the strongest-in-the-world contestant if they are actually the strongest in the world.

If you’re going to survey some subset of hackers please include the hackers who study trends, who seek economical solutions and manage operations, as they are the more likely experts who can speak with real data to the depth and extent of risk (and talent).

Being an “expert” in any field is about a willingness to learn, and teach, every single day — in a changing landscape. As much as younger hackers would have journalists believe otherwise, there is no shortcut (aside from platform and audience reach); true expert hackers have simply long embraced a routine (wax on, wax off).

But here’s the real rub: if you take the stunt hacks of great publicity (a strong man event in the circus) as totally unique and not just a trumped-up version of truth, you will let real liabilities slip away. You will be distracted by fluff. It’s dangerously wrong to underestimate threats because you simply believe a self-promotion snowflake story.

Journalists and hackers can easily collude, like a circus MC and the world’s strongest man, motivated by bigger audiences. The real and hard question is whether some group’s common knowledge is being exposed more widely, or you are seeing something truly unique.

3. Nothing is permanently secured, just temporarily patched.

Agree. This is self-evident. But the author misses the point entirely.

First, this third section completely contradicts the prior one. The author now makes the fine point that hackers are always learning and borrowing from prior hacks, telling a story of continuous improvements…after telling us in the second section above that every public hack is unique unto itself. More proof that the section above is broken.

Second, have you ever played a sport? Soccer/football? Have you ever studied an instrument? Studied a subject in school? Have you used a tool like a wrench or a screwdriver? No screw is permanently turned (I mean we’ve even had to invent self-tightening ones). Nothing in science is permanently learned.

Believe it or not there is a process in everything that tends to matter more than that one time you did one thing.

Constant. Improvement. In Everything.

It is shocking to travel through countries where progress ended, or worse, things fall apart and reverse. It really hits home (pun not intended) how nothing is permanent. But the author instead seems to think hackers are some kind of unique animal in this regard:

“Cybersecurity on a hamster wheel” is how longtime hacker Dino Dai Zovi describes to me this commonly experienced phenomenon…I spoke with Zheng and Shan after their presentation, and they explained that the hack took them about a month of work, at night after their day jobs.

That’s common human behavior. You work at something over and over to get good at it. There’s nothing really special to hackers about it. They’re just people who are spending their time practicing and trying things to get better, usually with technology but not exclusively.

4. Hackers continue to face uncertain legal and liability threats.

I don’t really believe Washington doesn’t get this about hackers. I mean laws are uncertain, and liability from those uncertain laws is, wait for it, uncertain. Kind of goes without saying, really, and explains why we have lawyers.

Sounds better to me as “people continue to face uncertain laws and threats of liability from hacking”. Perhaps it would be clearer if I put it as “hackers continue to face uncertain weather”. That’s true, right? The condition of uncertainty in laws is not linked to hacker existentialism.

So I agree there is uncertainty in laws faced by people, yet don’t see how it belongs in this list meant to tell Washington about hackers.

5. There is a wide disconnect between cyberpolicy and cybersecurity researchers.

When are researchers and policy writers not far apart? How is a general disconnect from policy writers in any way unique to hacking, let alone cyber? You can please some of the people all of the time or all of the people some of the time, etc.

there are still too few security researchers and government officials willing or courageous enough to communicate in public

Actually, I see the opposite. More people are rushing into security research than ever before precisely because it gives them a shot at being a public talking head. Ask me what it was like 20 or even 10 years ago when it took real courage to talk.

I’ll never forget the tone of lawyers in 1996 when they told me my career was over if I even spoke internally about my research, or ten years later when I was abruptly yanked out of a conference by an angry company executive. People today are talking all the time with less risk than ever before; practically every time I turn on the radio I hear some hacker talking.

Have you seen Dan Kaminsky on TV talking about PCI compliance requirements in the cloud? He sounded like a happy fish completely out of water who didn’t care because, hey, TV appearance.

Some younger hackers I have met recently were positively thrilled by the idea that someday they too can be interviewed on TV about something they don’t research because, in the spirit of Dan, hey TV appearance. As one recently told me “I just read up on whatever they ask me to be an expert on as I sit in the waiting room”.

Anyway, public communication is not really about finding the courage to be turned into a big TV talking head; it’s about negotiating for the best possible outcomes (public or private), which obviously are complicated by politics. Many hackers don’t want celebrity. They’re courageous because they refuse unwanted exposure and communicate in public but not publicly, if you know what I mean.

6. Hackers comprise a distinct community with its own ethics, morals, and values, many of which are tacit, but others that are enforced through self-policing.

False. Have you taken apart anything, ever? Have you tried to make something better or even just peek inside to understand it? Congratulations, you’ve entered a non-distinct community of hackers at some point in that process.

There is no bright line that qualifies hackers as a distinct community. It is not an “aha I’m now part of the hacker clan” moment. Even when hackers publish some version of ethics, morals, and values as a rough guide others tend to hack through them to fit better across a huge diversity in human experience.

Calling hacking distinct or exclusive really undermines the universality of curious exploration and improvement that benefits humans everywhere.

We might as well say political thinkers (politicians) comprise a distinct community; or people who study (students) are a distinct community. Such things can exist, yet really these are roles or phases and ways of thinking about finding solutions to problems. People easily move in and out of being politicians, students, hackers…

Real executive summary:

If there is anything Washington needs to understand it is that everyone is to some degree a hacker, and that’s a good thing.

Posted in Security.


Larger than Life (Stawka większa niż życie)

Today in 1939 Hitler and Stalin signed the Molotov-Ribbentrop Treaty (non-aggression pact) secretly dividing Poland. To add perspective I thought I would mention a classic TV spy series that is not widely known outside Poland.

Polish television, from March 1967 to October 1968 (18 episodes), told the story of secret agent Stanisław Kolicki (codename J-23), who carried out a secret mission in the Nazi army as Hans Kloss. Perhaps the most famous line of the protagonist is “Mów mi Janek”:

Call me Janek

The series begins in 1941, two years after the Nazis and Soviets conspired to divide and conquer Poland. Episode one shows a young Pole, Stanisław Kolicki, escape from a Königsberg camp to the Soviet side. He begins cooperating with Soviet intelligence by providing information about German troop concentrations along the border. Soviet intelligence notices a confusing similarity, an identical appearance, between Kolicki and a captured German officer, Hans Kloss. Codename J-23 is born and Kolicki makes a daring run into German-occupied territory. He begins organizing a counterintelligence network until the Gestapo become suspicious of radio communications and hunt him. He manages to fake his own death and escape back to the Soviet side. He then convinces Soviet intelligence to allow him to return. J-23 infiltrates the Abwehr again, this time as a “real” Lieutenant Kloss posted to Nazi military intelligence.

Posted in History, Security.


A Common Security Fallacy? Too Big to Fail (KISS)

Often I have journalists asking me to answer questions or send advice for a story. My reply takes a bit of time and reflection. Then, usually, although not always, I get an update something like this:

Loved what you had to say but had to cut something out. Editors, you know how it is. Had to make room for answers from my other experts…I’m sure you can understand. Look forward to hearing your answer next time

I DO understand. I see the famous names of people they’re quoting and the clever things they’re saying. They won, I lost. It happens. And then I started to wonder why not just publish my answers here too. That really was the point of having a blog. Maybe I should create a new category.

So without further ado, here’s something that I wrote that otherwise probably never will see the light of day:

Journalist: Tell me about a most common security fallacy

Me: let me start with a truism: KISS (keep it simple stupid)

this has always been true in security and will likely always be true. simpler systems are easier to secure because they are less sophisticated, more easily understood. complex systems tend to need to be broken down into bite-sized KISS pieces and their relationships modeled carefully or they’re doomed to unanticipated failures.

so the answer to one of most common security fallacies is…

too big to fail. also known as they’re big and have a lot to lose so they wouldn’t do the wrong thing. or there’s no way a company that big doesn’t have a lot of talent, so i don’t need to worry about security.

we’ve seen the largest orgs fail repeatedly at basic security (google, facebook, dropbox, salesforce, oracle!) because internal and external culture tends to give a pass on accountability. i just heard a journalist say giant anti-virus vendors would not have a back door because it would not be in their best interest. yet tell me how accountable they really are when they say “oops, we overlooked that” as they often do in their existing business model.

for a little historic context it’s the type of error made at the turn of the century with meat production in chicago. a book called “the jungle” pointed out that a huge fast-growth industrial giant could actually have atrocious safety, yet be protected by sheer size and momentum from any correction. it would take an object of equal or greater force (e.g. an authority granted by governance over a large population) to make an impact on their security.

so the saying should be “too big to be simple”. the larger an organization the more likely it could have hidden breaches or lingering risks, which is what we saw with heartland, tjx, target, walmart and so on. also the larger an organization the less likely it may have chemistry or incentives in place to do the right thing for customer safety.

there’s also an argument against being safe just because simple, but it is not nearly as common a fallacy.

Posted in History, Security.


Roll Your Own Kali 2.0 ISO

I noticed the good Kali folks have pre-released steps to make your own ISO for their upcoming 2.0 release.

# Workshop 01 – Rolling your own Kali 2.0 ISOs

I also noticed the steps do not work as written, mostly because files moved from archive to www. So here’s what worked for me:

Use existing Kali instance to prepare

$ sudo apt-get install live-build

This will install debootstrap 1.0.48+kali3, live-boot-doc 4.0.2-1, live-build 4.0.4-1kali7*, live-config-doc 4.0.2-1, and live-manual-html 1:3.0.2-1

Clone the builds

$ git clone git://git.kali.org/live-build-config.git
$ cd live-build-config

Add tools

$ echo "cryptsetup
> gparted
> amap" >> kali-config/variant-light/package-lists/kali.list.chroot

Enable SSH service at boot

$ echo 'update-rc.d -f ssh enable' >> kali-config/common/hooks/01-start-ssh.chroot
$ chmod 755 kali-config/common/hooks/01-start-ssh.chroot

Add your own public SSH key

$ mkdir -p kali-config/common/includes.chroot/username/.ssh/
$ cp ~/.ssh/id_rsa.pub kali-config/common/includes.chroot/username/.ssh/authorized_keys

Add unattended install option

$ vi kali-config/common/hooks/02-unattended-boot.binary

#!/bin/sh

cat >>binary/isolinux/install.cfg <<END
label install
menu label ^Unattended Install
menu default
linux /install/vmlinuz
initrd /install/initrd.gz
append vga=788 -- quiet file=/cdrom/install/preseed.cfg locale=en_US keymap=us hostname=kali domain=local.lan
END

$ chmod 755 kali-config/common/hooks/02-unattended-boot.binary
$ ls -al kali-config/common/hooks/

Create the unattended seed

$ wget https://www.kali.org/dojo/preseed.cfg -O ./kali-config/common/includes.installer/preseed.cfg

Install wallpaper (BlackHat or DEFCON blue)

$ wget https://www.kali.org/dojo/wp-blue.png -O kali-config/common/includes.chroot/usr/share/images/desktop-base/kali-wallpaper_1920x1080.png

NOTE: the images/desktop-base directory has disappeared in later builds. just add it back in with mkdir

Build the ISO

$ ./build.sh --variant light --distribution sana --verbose

After successful build the live-build-config/images subdirectory will have a 900M “kali-linux-light-sana” iso file.
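One extra step worth adding: since the ISO is the artifact you will carry around, record a checksum right after the build so the image can be verified before each use. A minimal sketch, demonstrated on a scratch file (substitute the real iso path under live-build-config/images):

```shell
# demonstrate recording and verifying an image checksum
# (a scratch file stands in for the real ISO built above)
mkdir -p /tmp/iso-demo && cd /tmp/iso-demo
echo "pretend ISO contents" > kali-linux-light-sana.iso

sha256sum kali-linux-light-sana.iso > SHA256SUMS   # record
sha256sum -c SHA256SUMS                            # verify before use
```

Running the verify step on the target machine catches both corruption and tampering, assuming the SHA256SUMS file travels by a separate channel.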


* NOTE: If you want to use another platform such as Ubuntu 14.04 you may find the usual package (sudo apt-get install live-build) causes problems. When you run the build.sh script it checks versions and fails like this:

ERROR: You need live-build (>= 4.0.4-1kali6), you have 3.0~a57-1ubuntu11.2

It should be possible to meet the dependencies and edit config files using the Debian live-build:

$ git clone git://live-systems.org/git/live-build.git

However, because “kali” is specified in the live-build version check, after several attempts to work around it on other systems I gave up and took the easy path — use an old Kali system to build a new Kali.


Updated to add: Rolling a trusted ISO is fun but obviously a docker pull is far easier, and more risky. Note the need for signed repository images if you’re going this route instead.

  • docker pull kalilinux/kali-linux-docker
  • docker run -t -i kalilinux/kali-linux-docker /bin/bash
  • apt-get update && apt-get install metasploit-framework

Posted in Security.


Saving the Bobcat: Lessons in Segmentation and Surveillance

California has just passed a statewide law banning harm to the bobcat.

The decision of the Commission reflects a growing sensibility in this state that wildlife should not be stalked, trapped, shot, or beaten to death for sport or frivolous goods

The move came after it was revealed that attackers had advanced in two significant ways: monitoring the Internet to find targets and then using lures to pull the targets out of state parks where they were protected.

trappers monitor social media for wildlife lovers’ bobcat photos to determine where to set their traps

Bobcats under attack

The state finally was forced to react after 30,000 signatures called for action to deal with the obvious social harm. California decided to expand the scope of protection from porous safe zones to the entire state.

Those familiar with PCI DSS compliance realize this is like a CIO agreeing to monitor every system under their authority for motivated attackers, instead of defining scope as only those few servers where PII should be found.

Justification of a statewide ban was based not just on evidence of attackers bypassing perimeters with ease. Conservationists pointed out that the authorities have failed to maintain any reasonable monitoring of harm to state assets.

[California] could not determine whether trapping jeopardized the species because they had no current scientific data

Thus we have an excellent study in nature of what we deal with constantly in infosec; a classic case of attackers adapting methods for personal gain while community/defenders are slow to build and examine feedback loops or reliable logs of harm.

Should it have taken 30,000 signatures before the state realized they had such obvious perimeter breaches?

Fortunately, bobcats now are protected better. The species will have a chance of survival, or at least protection from attack, as scientists figure out how best to design sustainable defenses.

Action taken sooner is far better than later. Once the species is driven to extinction it may be impossible to restore/recover, as has been the case with many other animals including the bear on the state flag.

Posted in Security.


Howto: Delete old Docker containers

I’ve been working quite a bit lately on a secure deletion tool for Docker containers. Here are a few notes on basic delete methods, without security, which hints at the problem.

  • List all current containers
  • $ docker ps -a

    CONTAINER ID  IMAGE        COMMAND   CREATED             STATUS                        PORTS  NAMES
    e72211164489  hello-world  "/hello"  About a minute ago  Exited (0) About a minute ago        ecstatic_goodall
    927e4ab62b82  hello-world  "/hello"  About a minute ago  Exited (0) About a minute ago        naughty_pasteur       
    d71ff26dbb90  hello-world  "/hello"  4 minutes ago       Exited (0) 4 minutes ago             hungry_wozniak        
    840279db0bd7  hello-world  "/hello"  5 minutes ago       Exited (0) 5 minutes ago             lonely_pare           
    49f6003093eb  hello-world  "/hello"  25 hours ago        Exited (0) 25 hours ago              suspicious_poincare   
    6861afbbab6d  hello-world  "/hello"  27 hours ago        Exited (0) 26 hours ago              high_carson           
    2b29b6d5a09c  hello-world  "/hello"  3 weeks ago         Exited (0) 3 weeks ago               serene_elion          
    
  • List just containers weeks old
$ docker ps -a | grep 'weeks'

    CONTAINER ID  IMAGE        COMMAND   CREATED             STATUS                        PORTS  NAMES
    2b29b6d5a09c  hello-world  "/hello"  3 weeks ago         Exited (0) 3 weeks ago               serene_elion          
    
  • List all containers by ID
$ docker ps -a | grep 'ago' | awk '{print $1}'

    e72211164489  
    927e4ab62b82         
    d71ff26dbb90          
    840279db0bd7          
    49f6003093eb    
    6861afbbab6d         
    2b29b6d5a09c          
    
  • List all containers by ID, joined to one line
$ docker ps -a | grep 'ago' | awk '{print $1}' | xargs

    e72211164489 927e4ab62b82 d71ff26dbb90 840279db0bd7 49f6003093eb 6861afbbab6d 2b29b6d5a09c          
    
  • List ‘hours’ old containers by ID, joined to one line, and if found prompt to delete them
$ docker ps -a | grep 'hours' | awk '{print $1}' | xargs -r -p docker rm

    docker rm 49f6003093eb 6861afbbab6d ?...
    

    Press y to delete, n to cancel
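A pipeline like the one above is easy to dry-run against canned output before pointing it at a live daemon. A sketch, reusing IDs from the listing above:

```shell
# canned `docker ps -a` output (two rows from the listing above)
sample='CONTAINER ID  IMAGE        COMMAND   CREATED       STATUS
49f6003093eb  hello-world  "/hello"  25 hours ago  Exited (0) 25 hours ago
2b29b6d5a09c  hello-world  "/hello"  3 weeks ago   Exited (0) 3 weeks ago'

# same filter as above: keep "hours" rows, take the ID column, join to one line
echo "$sample" | grep 'hours' | awk '{print $1}' | xargs
```

Only 49f6003093eb survives the filter. Note the docker client also has built-in filtering (e.g. docker ps -aq -f status=exited), which avoids parsing the human-readable table at all.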

Posted in Security.


Today in History: Antoine de Saint-Exupéry Disappears

On July 31, 1944, Antoine de Saint-Exupéry flew a Lockheed P-38 Lightning on a morning reconnaissance mission, despite being injured and nearly ten years over the pilot age limit. It was the last day he was seen alive. A bracelet bearing his name was later found by a fisherman offshore between Marseille and Cassis, which led to discovery of the wreckage of his plane.

Saint-Exupéry was an unfortunate pilot with many dangerous flying accidents over his career. One in particular came during a raid, an attempt to set a speed record from Paris to Hanoï, Indochine and back to Paris. Winning would have meant a 150,000 franc prize. Instead Saint-Exupéry crashed in the Sahara desert.

Besides being a pilot of adventure he also was an avid writer and had studied drawing in a Paris art school. In 1942 he wrote The Little Prince, which has been translated into more than 250 languages and is one of the most well-known books in the world. Saint-Exupéry never received any of its royalties.

It brings to mind the rash of people now posting videos and asking their fans to pay to view/support their adventures.

Imagine if Saint-Exupéry had taken a video selfie of his crash and survival in the Sahara desert and posted it straight to a sharing site, asking for funds…instead of writing a literary work of genius and seeing none of its success.

Posted in History, Security.


Convert Kali Linux VMDK to KVM

I was fiddling around in Ubuntu 14.04 with Docker and noticed a Kali Linux container installation was just four steps:

$ wget -qO- https://get.docker.com/ | sh
$ docker pull kalilinux/kali-linux-docker
$ docker run -t -i kalilinux/kali-linux-docker /bin/bash
# apt-get update && apt-get install metasploit

This made me curious about comparing to the VM steps. Unfortunately they still only offer a VMDK version to play with. And this made me curious about how quickly I could convert to KVM.

On my first attempt I did the setup and conversion in eight steps (nine if you count cleanup):

  1. Install KVM

     $ sudo apt-get install qemu-kvm libvirt-bin ubuntu-vm-builder bridge-utils virt-goodies p7zip-full

  2. Download the Kali vmdk zip file

     $ wget http://images.kali.org/Kali-Linux-1.1.0c-vm-amd64.7z

     (Optional) Verify checksum is 1d7e835355a22e6ebdd7100fc033d6664a8981e0

     $ sha1sum Kali-Linux-1.1.0c-vm-amd64.7z

  3. Extract the zip file

     $ 7z x Kali-Linux-1.1.0c-vm-amd64.7z
     $ cd Kali-Linux-1.1.0c-vm-amd64
     $ ll

     -rw------- 1 user user 3540451328 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s001.vmdk
     -rw------- 1 user user 1016725504 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s002.vmdk
     -rw------- 1 user user 1261895680 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s003.vmdk
     -rw------- 1 user user 1094582272 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s004.vmdk
     -rw------- 1 user user  637468672 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s005.vmdk
     -rw------- 1 user user  779747328 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s006.vmdk
     -rw------- 1 user user 1380450304 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s007.vmdk
     -rw------- 1 user user    1376256 Mar 13 03:50 Kali-Linux-1.1.0c-vm-amd64-s008.vmdk
     -rw------- 1 root root        929 Mar 13 02:56 Kali-Linux-1.1.0c-vm-amd64.vmdk
     -rw-r--r-- 1 user user          0 Mar 13 05:11 Kali-Linux-1.1.0c-vm-amd64.vmsd
     -rwxr-xr-x 1 root root       2770 Mar 13 05:11 Kali-Linux-1.1.0c-vm-amd64.vmx*
     -rw-r--r-- 1 user user        281 Mar 13 05:11 Kali-Linux-1.1.0c-vm-amd64.vmxf

  4. Convert 'vmdk' to 'qcow2'

     $ qemu-img convert -f vmdk Kali-Linux-1.1.0c-vm-amd64.vmdk -O qcow2 Kali-Linux-1.1.0c-vm-amd64.qcow2

  5. Change ownership

     $ sudo chown username:group Kali-Linux-1.1.0c-vm-amd64.qcow2

  6. Convert 'vmx' to 'xml'

     $ vmware2libvirt -f Kali-Linux-1.1.0c-vm-amd64.vmx > Kali-Linux-1.1.0c-vm-amd64.xml

     (Note this utility was installed by virt-goodies. An alternative is to download just vmware2libvirt and run it as "python vmware2libvirt -f Kali-Linux-1.1.0c-vm-amd64.vmx > Kali-Linux-1.1.0c-vm-amd64.xml")

     (Optional) Create some uniqueness by replacing default values (e.g. mac address 00:0C:29:4B:9C:DF) in the xml file

     uuid
     $ uuidgen

     mac address
     $ echo 00:0C:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')

     $ vi Kali-Linux-1.1.0c-vm-amd64.xml

  7. Create the VM

     $ sudo ln -s /usr/bin/qemu-system-x86_64 /usr/bin/kvm
     $ virsh -c qemu:///system define Kali-Linux-1.1.0c-vm-amd64.xml

  8. Edit the VM configuration to link the new qcow2 file

     Find this section:

     <driver name='qemu' type='raw'/>
     <source file='/path/Kali-Linux-1.1.0c-vm-amd64.vmdk'/>

     Change raw and vmdk to qcow2:

     <driver name='qemu' type='qcow2'/>
     <source file='/path/Kali-Linux-1.1.0c-vm-amd64.qcow2'/>

  9. Start the VM

     $ virsh start Kali-Linux-1.1.0c-vm-amd64

  10. Clean up by deleting the vmdk files

     $ rm *.v*
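The random mac address one-liner from the vmware2libvirt step deserves a quick sanity check before it goes into the xml. A sketch that only asserts the format (six hex octets), not uniqueness:

```shell
# build a MAC the same way as above: fixed 00:0C prefix,
# four octets taken from a hash of random bytes
mac="00:0C:$(dd if=/dev/urandom count=1 2>/dev/null | md5sum | sed 's/^\(..\)\(..\)\(..\)\(..\).*$/\1:\2:\3:\4/')"

# confirm it parses as a six-octet MAC address
echo "$mac" | grep -Eq '^([0-9a-fA-F]{2}:){5}[0-9a-fA-F]{2}$' && echo "format ok: $mac"
```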

Posted in Security.


Howto: Install GPG on Jolla Sailfish OS

A Finnish start-up, Jolla, announced at the end of 2013 that it was producing a free and open source Sailfish OS, with an open hardware smart phone.

Here is a quick three-step guide to getting GPG installed.

STEP 1) install pinentry

You have three options:

  1. compile from source
  2. install pinentry-0.8.3-1.armv7hl.rpm
  3. use the Warehouse app to search for “pinentry” in OpenRepos, add the “veskuh” repository and install gnupg-pinentry

STEP 2) open the terminal and install the GnuPG software

[nemo@Jolla ~]$ pkcon install gnupg2

Currently this installs version 2.0.4 with a home of ~/.gnupg

Supported algorithms:

    Pubkey: RSA, ELG, DSA
    Cipher: 3DES, CAST5, BLOWFISH, AES, AES192, AES256, TWOFISH
    Hash: MD5, SHA1, RIPEMD160, TIGER192, SHA256, SHA384, SHA512, SHA224
    Compression: ZIP, ZLIB, BZIP2

STEP 3) use the terminal to create a key

[nemo@Jolla ~]$ gpg2 --gen-key

Please select what kind of key you want:
   (1) DSA and Elgamal (default)
   (2) DSA (sign only)
   (5) RSA (sign only)
Your selection? [Enter]
DSA keypair will have 1024 bits.
ELG-E keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) [Enter]
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) [Enter]
Key does not expire at all
Is this correct? (y/N) y
You need a user ID to identify your key; the software constructs the 
user ID from the Real Name, Comment and Email Address in this form:
    "Heinrich Heine (Der Dichter) <heinrichh@duesseldorf.de>"
Real name: Davi Ottenheimer
Email address: davi@flyingpenguin.com
Comment:[Enter]
You selected this USER-ID:
    "Davi Ottenheimer <davi@flyingpenguin.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O

lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk
x Enter passphrase                            x
x                                             x
x                                             x
x Passphrase _***********_____________________x
x                                             x
x       OK           Cancel                   x
lqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqk

We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.

gpg: key XXXXXXXX marked as ultimately trusted
public and secret key created and signed.

gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u
pub 1024D/XXXXXXXX 2015-07-29
Key fingerprint = XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX XXXX
uid Davi Ottenheimer <davi@flyingpenguin.com>
sub 2048g/YYYYYYYY 2015-07-29
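As a sketch only (not part of the original steps): the same kind of key can also be generated non-interactively with GnuPG's batch mode, which may help if pinentry misbehaves on the phone. The parameters below mirror the interactive defaults above; the passphrase line is a placeholder you must replace.

```shell
# Assumed sketch: unattended key generation with gpg2 --batch.
# Mirrors the interactive session above: DSA 1024 signing key,
# 2048-bit Elgamal encryption subkey, no expiry.
cat > keyparams <<'EOF'
Key-Type: DSA
Key-Length: 1024
Subkey-Type: ELG-E
Subkey-Length: 2048
Name-Real: Davi Ottenheimer
Name-Email: davi@flyingpenguin.com
Expire-Date: 0
Passphrase: use-a-real-passphrase-here
%commit
EOF
gpg2 --batch --gen-key keyparams
rm keyparams    # do not leave the passphrase sitting on disk
```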

STEP 3.5) verify key

[nemo@Jolla ~]$ gpg2 -k

/home/nemo/.gnupg/pubring.gpg
-----------------------------
pub 1024D/XXXXXXXX 2015-07-29
uid Davi Ottenheimer <davi@flyingpenguin.com>
sub 2048g/YYYYYYYY 2015-07-29

NOTE: you may want to move and keep your secret key on a removable storage card
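The note above can be done from the terminal. This is a minimal sketch, assuming the SD card is mounted at /media/sdcard (adjust for your device) and XXXXXXXX is the key ID shown in Step 3.5:

```shell
# Sketch: back up the secret key to removable storage and restrict
# permissions. KEYID and CARD are assumptions -- substitute your own.
KEYID=XXXXXXXX
CARD=/media/sdcard
gpg2 --export-secret-keys --armor "$KEYID" > "$CARD/secret-key.asc"
chmod 600 "$CARD/secret-key.asc"   # owner-only read/write
```

If you want the secret key to live only on the card, you could then remove it from the on-device keyring with `gpg2 --delete-secret-keys "$KEYID"` once you have verified the backup.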

Posted in Security.


We Need a Digital Right to Repair

Dan Lyke asked me a good question today, in response to my Jeep of Death blog post and tweets about patching:

So yay for sharing, but we shouldn’t normalize getting your car patches from random Internet users.

On the one hand it would be easy to agree with Dan’s point. Randomness sounds scary and untrustworthy.

On the other hand, reality says doing safe business with random people might be a reasonable and normal state of affairs. I mean imagine getting a car chip update from a random vendor, or a part to fix your suspension or brakes. Imagine getting fuel from a random vendor.

Can trust or standards of care be established to allow randomness? Yes, obviously. Hello FTC.

My response to Dan was briefer than that, because, well, Twitter:

why not? we get other “after market” fixes for cars all the time

This does not convince Dan, unfortunately. He asks an even scarier question:

would you run random executables emailed to you by internet strangers? On your car?

I try to explain again what I said before: we already enjoy a market full of randomness that our cars execute (e.g. gasoline, diesel…even steam and vegetable oil). And that is a good thing.

YES. because i have a digital right to repair, i would. i have been doing this on my diesel chip and motorcycles for a decade.

As far as I can tell Ducati was the first to allow after-market software patches on their engines, more than fifteen years ago. I owned a 2001 motorcycle that certainly allowed for it as I patched the ECU about every year, always after-market and sometimes with a random mechanic.

The idea that we should allow any patching process to be wholly controlled by vendors and not at all by consumers or independent mechanics sounds to me like a very dangerous imbalance.

Allow me to explain in more than 140 characters:

Having the right to repair is actually an ancient fight. Anyone familiar with American political history knows horror stories about Standard Oil and Ma Bell, let alone GM and Ford; monopolies that tried to shut down innovators. Or maybe I should invoke Bill Gates’ angry letter to hobbyists?

Lessons learned from history can be plenty relevant to today’s dilemmas. Consider for example the Right to Repair legislation, which I last blogged about in 2005, pushed by the late great Senator Wellstone.

[image: wellstone]

The argument made in 2001 by Senator Wellstone was that manufacturers had established an “unfair monopoly” by locking away essential repair information, which prohibited independent mechanics from working on cars.

Wellstone’s ‘Motor Vehicle Owners’ Right to Repair Act’ Gives Vehicle Owners the Right to Choose Where, How and by Whom To Have Repairs and Parts of Their Choice. […] This legislation allows the vehicle owners — and not the car manufacturers — to own the repair and parts information on their personal property, this time their vehicles. It simply allows motoring consumers to have the ability to choose where, how and by whom to have their vehicles repaired and to choose the replacement parts of their choice — even to work on their own vehicles if they choose.

Opposition to the legislation came not only from the big companies that would have to share information with customers. Some outside the companies also argued against transparency and self-service (or at least independent service). Believe it or not, for years statements were being made about protecting “high-tech” car security (e.g. passive anti-theft devices such as smart keys and engine immobilizers) with obfuscation.

Of course we know obfuscation to be a weak argument in information security, right? Put recent news about electronic key thieves in perspective: Consumer Reports argued in the mid-2000s that obfuscation of key technology would better protect consumers from threats…

Well, the fight against a consumer right to repair cars dragged on and on until Massachusetts politicians broke through the nonsense in 2012 and passed H. 4362, a Right to Repair law that was seen as a compromise the car manufacturers could swallow.

Nearly thirteen years after Wellstone introduced his bill, an important federal step was taken towards normalizing random patches.

The long fight over “right to repair” seems to be nearing an end.

For more than a decade, independent car repair chains such as Jiffy Lube and parts retailers such as AutoZone have been lobbying for laws that would give them standardized access to the diagnostic tools that automakers give their franchised dealers.

Automakers have resisted, citing the cost of software changes required to make the information more accessible.

It was because the benefit was mostly external (to consumers) while the cost was mostly internal (to automakers) that regulators had to step in and balance the economics of repair information access. Wellstone was wise to recognize that consumer safety from access to information, along with lower-cost and faster repairs to the things consumers own, could be more beneficial to the auto industry than higher margins.

I attempted to translate this political theory into today’s terms by Tweeting at people for a Digital Right to Repair on Android phones.

[image: years-for-fixes]

Perhaps I see the parallels today because I ran security programs for mobile at Yahoo! a decade ago and noticed the same dynamics back then.

Phone manufacturers were slow to push security updates. Consumers were slow to pull them. It seemed, from a cost-effective risk management view, that extending a Right to Repair to hundreds of millions of consumers would essentially grease the wheels of progress and improvement. We anticipated patches would roll out sooner, and innovation would surface wherever it was available, because knowledge spreads.

In other words rules that prevented understanding internals of devices also stalled understanding how to repair. To me that is a very serious security calculation.

What industry needs to discuss specifically is whether the rules to prevent understanding will unreasonably prevent safety protections from forming. Withholding information may push consumers unwittingly into an expensive and dangerous risk scenario that easily could have been avoided. Who should be held responsible when that happens?

Looking forward, the economics of IoT patching (i.e. trillions of devices needing triage) begs the question: why not enhance sharing, to leverage local resources for partnership and innovation in self-defense? As we move toward more devices needing repair, I certainly hope we do not lose sight of Wellstone’s legacy and the lessons his Act has taught us.

Posted in History, Security.