

2014 Things Expo: New Security Models for the Internet of Things

Thank you to my interactive audience at the 2014 Things Expo in NYC. Really appreciate everyone attending my “New Security Models for the Internet of Things” session to close out the conference. Excellent feedback and I am pleased to see such interest in security!

Please find a PDF version here.

Posted in Security.


Cyber-Colonialism and Beliefs About Global Development

Full disclosure: I spent my undergraduate and graduate degree time researching the ethics of intervention with a focus on the Horn of Africa. One of the most difficult questions to answer was how to define colonialism. Take Ethiopia, for example. It was never colonized and yet the British invaded, occupied and controlled it from 1940-1943 (the topic of my MSc thesis at LSE).

I’m not saying I am an expert on colonialism. I’m saying that after many years of research, including spending a year reading original papers from the 1940s British Colonial Office and meeting with ex-colonial officers, I have a really good sense of how hard it is to become an expert on colonialism.

Since then, every so often, I hear someone in the tech community coming up with a theory about colonialism. I do my best to dissuade them from going down that path. Here came another opportunity on Twitter from Zooko:

This short post instantly changed my beliefs about global development. “The Dawn of Cyber-Colonialism” by @GDanezis

If nothing else, I would like to encourage Zooko and the author of “dawn of Cyber-Colonialism” to back away from simplistic invocations of colonialism and choose a different discourse to make their point.

Maybe I should start by pointing out an irony often found in the anti-colonial argument. The usual worry about “are we headed towards colonialism” is tied to some rather unrealistic assumptions. It is like a thinly-veiled way for someone to think out loud: “our technology is so superior to these poor savage countries, and they have no hope without us, we must be careful to not colonize them with it”.

A lack of self-awareness in commercial worldviews is nothing new. John Stuart Mill, for example, opined in the 1860s that only through commercial influence would any individual realize true freedom and self-governance; yet he feared colonialists could spoil everything by never restraining themselves or developing beyond their own self-interests. His worry was specifically that colonizers did not understand local needs, did not have sympathy, did not remain impartial in questions of justice, and would always think of their own profits before development. (Considerations on Representative Government)

I will leave the irony of the colonialists’ colonialism lament at this point, rather than digging into what motivates someone’s concern about those “less-developed” people and how the “most-fortunate” will define best interests of the “less-fortunate”.

People tend to get offended when you point out they may be the ones with colonialist bias and tendencies, rather than those they aim to criticize for being engaged in an unsavory form of commerce. So rather than delve into the odd assumptions taken among those who worry, instead I will explore the framework and term of “colonialism” itself.

Everyone today hates, or should hate, the core concepts of colonialism, because the concept has been boiled down so much that it is little more than an evil relic of history.

A tempting technique in discourse is to create a negative association. Want people to dislike something? Just broadly call it something they already should dislike, such as colonialism. Yuck. Cyber-colonialism, future yuck.

However, using an association with colonialism actually is not as easy as one might think. A simplified definition of colonialism tends to be quite hard to get anyone to agree upon. The subjugation of one group by another through integrated domination might be a good way to start the definition. And just look at all the big words in that sentence.

More than occupation, more than unfair control or any deals gone wrong, colonialism is tricky to pin down because of elements of what is known as “colonus” and of measuring success in agrarian rather than nomadic terms.

Perhaps a reverse view helps clarify. The exit-barrier to colonialism is not just a change in political and economic controls. Successful colonies are characterized by active infiltration by people who settle in and, through persistent integration, displace and strip control from anyone they find in order to “gain” from the acquired assets. It is an act of displacement coupled with active and forced reprogramming, an early exploration of corporations for profit.

Removing something colonus, therefore, is unlike removing elements performing routine work along commercial lines. Even if you fire bad processors/workers, colonialism would remain. Instead removal means to untangle and reverse steps that were meant to output a new commercially-driven “civilization”. De-occupation is comparatively easy. Removing control, cancelling a deal or a contract, is also easy. De-colonization is hard.

If I must put it in terms of IT, we are talking about hardware that actively tries to take control of my daily life and integrate into my existing processes while reducing my control over direction. Not just a bad chip, it is an entire business process attack. It would be like someone infecting our storage devices with bitcoin mining code that they not only profit from but also use to permanently settle in our environment and prevent us from having a final say about our own destiny. Reformulating business processes is messy, far worse than a bug in any chip.

My study in undergraduate and graduate school really tried to make sense of the end of colonialism and the role of foreign influence in national liberation movements through the 1960s. This was not a study of a patching mechanism or a new source of materials. I never found, not even in the extensive work of European philosophers, a simple way to describe the very many facets of danger from always uninvited (or even sometimes invited) guests who were able to invade and completely run large complex organizations.

Perhaps now you can see the trouble with colonialism definitions.

Now take a look at this odd paraphrase of the Oxford Dictionary (presumably because the author is from the UK), used to set up the blog post called “The dawn of Cyber-Colonialism”:

The policy or practice of acquiring full or partial political control over another country’s cyber-space, occupying it with technologies or components serving foreign interests, and exploiting it economically.

Pardon my French but this is complete bullshit. Such a definition at face value is far too broad to be useful. Partial control over another country by occupying it with stuff to serve foreign interest and exploiting it sounds like what most would call imperialism at worst, commerce at best. I mean nothing in that definition says “another country” is harmed. Harm seems essential. Subjugation is harmful. That definition also doesn’t say anything about being opposed to control or occupation, let alone exploitation.

I’m not going to blow apart the definition bit-by-bit as much as I am tempted. It fails across multiple levels and I would love to destroy each.

Instead I will just point out that such a horrible definition would result in Ethiopia having to say it was colonized because of the British intervention of 1940 to remove Axis invaders and put Haile Selassie back into power. Simple test. That definition fails.

Let me cut right to the chase. As I mentioned at the start, those arguing that we are entering an era of cyber-colonialism should think carefully whether they really want to wade into the mess of defining colonialism. I advise everyone to steer clear and choose other pejorative and scary language to make a point.

Actually, I encourage them to tell us how and why technology commerce is bad in precise technical details. It seems lazy for people to build false connections and use association games to create negative feeling and resentment instead of being detailed and transparent in their research and writing.

On that note, I also want to comment on some of the technical points found in the blog claiming to see a dawn of colonialism:

What is truly at stake is whether a small number of technologically-advanced countries, including the US and the UK, but also others with a domestic technology industry, should be in a position to absolutely dominate the “cyber-space” of smaller nations.

I agree in general there is a concern with dominance, but this representation is far too simplistic. It assumes the playing field is made up of countries (presumably UK is mentioned because the blog author is from the UK), rather than what really is a mix of many associations, groups and power brokers. Google, for example, was famous in 2011 for boasting it had no need for any government to exist anymore. This widely discussed power hubris directly contradicts any thesis that subjugation or domination come purely from the state apparatus.

Consider a small number of technologically-advanced companies. Google and Amazon are in a position to absolutely dominate the cyber-space of smaller nations. This would seem as legitimate a concern as past imperialist actions. We could see the term “Banana Republic” replaced as countries become a “Search Republic”.

It’s a relationship fairly easy to contemplate because we already see evidence of it. Google’s chairman told the press he was proud of “Search Republic” policies and completely self-interested commerce (the kind Mill warned about in 1861): he said, “It’s called capitalism.”

Given the mounting evidence of commercial and political threat to nations from Google, what does cyber-colonialism really look like in the near, or even far-off, future?

Back to the blog claiming to see a dawn of colonialism, here’s a contentious prediction of what cyber-colonialism will look like:

If the manager decides to go with modern internationally sourced computerized system, it is impossible to guarantee that they will operate against the will of the source nation. The manufactured low security standards (or deliberate back doors) pretty much guarantee that the signaling system will be susceptible to hacking, ultimately placing it under the control of technologically advanced nations. In brief, this choice is equivalent to surrendering the control of this critical infrastructure, on which both the economic well-being of the nation and its military capacity relies, to foreign power(s).

The blog author, George Danezis, apparently has no experience with managing risk in critical infrastructure or with auditing critical infrastructure operations, so I’ll try to put this in a more tangible and real context:

Recently, on a job in Alaska, I was riding a state-of-the-art train. It had enough power in one engine to run an entire American city. Perhaps I will post photos here, because the conductor opened the control panels and let me see all of the great improvements in rail technology.

The reason he could let me in and show me everything was that the entire critical infrastructure was shut down. I was told this happened often. When the central switching system had a glitch, which was more often than you might imagine, all the trains everywhere were stopped. After touring the engine, I stepped off the train and up into a diesel truck driven by a rail mechanic. His beard was as long as a summer day in Anchorage and he assured me trains have to be stopped due to computer failure all the time.

I was driven back to my hotel because no trains would run again until the next day. No trains. In all of Alaska. America. So while we opine about colonial exploitation of trains, let’s talk about real reliability issues today and how chips with backdoors really stack up. Someone sitting at a keyboard can worry about the resilience of modern chips all they want, but it needs to be linked to experience with the “modern internationally sourced computerized system” used to run critical infrastructure. I have audited critical infrastructure environments since 1997 and let me tell you they have a unique and particular risk management model that would probably surprise most people on the outside.

Risk is something rarely understood from an outside perspective unless time is taken to explore actual faults in big-picture environments and to study actual events happening now and in the past. In other words, you can’t do a very good job auditing without spending time doing the audit, on the inside.

A manager going with a modern internationally sourced computerized system is (a) subject to a wide spectrum of factors of great significance (e.g. dust, profit, heat, water, parts availability, supply chains), and (b) worried about the presence of backdoors for the opposite reason you might think; they represent hope for support and help during critical failures. I’ll say it again, they WANT backdoors.

It reminds me of a major backdoor into a huge international technology company’s flagship product. The door suggested potential for access to sensitive information. I found it, I reported it. Instead of showing alarm, the company repeatedly assured me I had stumbled upon a “service” highly desirable to customers who did not have the resources or desire to troubleshoot critical failures. I couldn’t believe it. But as the saying goes: one person’s bug is another person’s feature.

To make this absolutely clear, there is a book called “Back Door Java” by Newberry that I highly recommend people read if they think computer chips might be riddled with backdoors. It details how the culture of Indonesia celebrates the backdoor as an integral element of progress and resilience in daily lives.

Cooking and gossip are done through a network of access to everyone’s kitchen, in the back of a house, connected by alley. Service is done through back, not front, paths of shared interests.

This is not that peculiar when you think about American businesses that hide critical services in alleys and loading docks away from their main entrances. A hotel guest in America might say they don’t want any backdoors until they realize they won’t be getting clean sheets or even soap and toilet-paper. The backdoor is not inherently evil and may actually be essential. The question is whether abuse can be detected or prevented.

Dominance and control are quite complex when you really look at the relationships of groups and individuals engaged in access paths that are overt and covert.

So back to the paragraph we started with, I would say a manager is not surrendering control in the way some might think when access is granted, even if access is greater than what was initially negotiated or openly/outwardly recognized.

With that all in mind, re-consider the subsequent colonization arguments given by “The dawn of Cyber-Colonialism”:

Not opting for computerized technologies is also a difficult choice to make, akin to not having a mobile phone in the 21st century. First, it is increasingly difficult to source older hardware, and the low demand increases its cost. Without computers and modern network communications is it also impossible to benefit from their productivity benefits. This in turn reduces the competitiveness of the small nation infrastructure in an international market; freight and passengers are likely to choose other means of transport, and shareholders will disinvest. The financial times will write about “low productivity of labor” and a few years down the line a new manager will be appointed to select option 1, against a backdrop of an IMF rescue package.

That paragraph has an obvious false choice fallacy. The opposite of granting access (prior paragraph) would be not granting access. Instead we’re being asked in this paragraph to believe the only other choice is lack of technology.

Does anyone believe it is increasingly difficult to source older hardware? We are given no reason. I’ll give you two reasons why old hardware could be increasingly easy to source: reduced friction and increased privacy.

About 20% of people keep their old device because it’s easier than selling it. Another 20% keep their device because of privacy concerns. That’s 40% of old hardware sitting and ready to be used, if only we could erase the data securely and make it easy to exchange for money. SellCell.com (trying to solve one of the problems) claims the supply of older cellphone hardware in America alone now is worth about $47 billion.

And who believes that low demand increases cost? What kind of economic theory is this?

Scarcity increases cost, but we do not have evidence of scarcity. We have the opposite. For example, there is no demand for the box of Blackberry phones sitting on my desk.

Are you willing to pay me more for a Blackberry because of low demand?

Even more suspect is the statement that without computers and modern network communications it is impossible for a country to benefit. Having been given a false-choice fallacy (either the latest technology or nothing at all), are we to believe everyone in the world who doesn’t buy new technology is doomed to fail and devalue their economy?

Apply this to ANY environment and it should be abundantly clear why this is not the way the world works. New technology is embraced slowly, cautiously (relative terms) versus known good technology that has proven itself useful. Technology is bought over time with varying degrees of being “advanced”.

To further complicate the choice, some supply chains have a really long tail due to the nature of a device achieving a timeless status and generating localized innovation with endless supplies (e.g. the infamous AK-47, classic cars).

To make this point clearer, just tour the effects of telecommunications providers in countries like South Africa, Brazil, India, Mexico, Kenya and Pakistan. I’ve written about this before on many levels and visited some of them.

I would not say it is the latest or greatest tech, but the tech available, that builds economies by enabling disenfranchised groups to create commerce and increase wealth. When a customer tells me they can only get 28.8K modem speeds I do not laugh at them or pity them; I look for solutions that integrate with slow links for incremental gains in resilience, transparency and privacy. When I’m told 250ms latency is the norm it’s the same thing: I build solutions that integrate and provide incremental gains. It’s never all-or-nothing.

A micro-loan robot in India that goes into rough neighborhoods to dispense cash, for example, is a new concept based on relatively simple supplies that has a dramatic impact. Groups in a Kenyan village share a single cell-phone and manage it similarly to the old British phone booth. There are so many more examples, none of which break down in simple terms of the amazing US government versus technologically-poor countries left vulnerable.

And back to the blog paragraph we started with, my guess is the Financial Times will write about “productivity of labor” if we focus on real risk, and a few years down the line new managers will be emerging in more places than ever.

Now let’s look at the conclusion given by “The dawn of Cyber-Colonialism”:

Maintaining the ability of western signals intelligence agencies to perform foreign pervasive surveillance, requires total control over other nations’ technology, not just the content of their communication. This is the context of the rise of design backdoors, hardware trojans, and tailored access operations.

I don’t know why we should believe anything in this paragraph. Total control of technology is not necessary to maintain intelligence capabilities; the claim defies common sense. Total control is not necessary for intelligence to be highly effective, nor does it guarantee better intelligence than partial or incomplete control (as explained best by David Hume).

My guess is that paragraph was written with those terms because they have a particular ring to them, meant to evoke a reaction rather than explain a reality or demonstrate proof.

Total control sounds bad. Foreign pervasive surveillance sounds bad. Design backdoors, Trojan horses and tailored access (opposite of total control) sound bad. It all sounds so scary and bad, we should worry about them.

But back to the point, even if we worry because such scary words are being thrown at us about how technology may be tangled into a web of international commerce and political purpose, nothing in that blog on “cyber-colonialism” comes even close to qualifying as colonialism.

Posted in History, Security.


US Wants to Help Africa on the Rise

I am happy to see Secretary of State John Kerry saying in the Washington Post that America needs to help Africa with difficult decisions that lie ahead:

The best untold story of the last decade may be the story of Africa. Real income has increased more than 30 percent, reversing two decades of decline. Seven of the world’s 10 fastest-growing economies are in Africa, and GDP is expected to rise 6 percent per year in the next decade. HIV infections are down nearly 40 percent in sub-Saharan Africa and malaria deaths among children have declined 50 percent. Child mortality rates are falling, and life expectancy is increasing.

Reading between the lines, Kerry seems to be watching America lose influence at a time when it should be pulled in by the Africans. He is advising Americans to start thinking of Africa in broader terms of partnership rather than just a place to impose Pentagon-led “protective” objectives (e.g. stability for corporate margins, access to infrastructure projects for intel to chase and find our enemies, humanitarian assistance to verify our intel access to infrastructure is working).

A shift from Pentagon objectives to State Department ones (unless I’m being naive, there still exists a significant difference between the two) sounds like a good idea. Kerry does not back away from highlighting past American efforts as he moves towards imposing an American view of how to measure success:

The U.S. government has invested billions of dollars in health care, leading to real progress in combating AIDS and malaria. Our security forces work with their African counterparts to fight extremism. U.S. companies are investing in Africa through trade preferences under the African Growth and Opportunity Act. As a friend, the United States has a role to play in helping Africans build a better future.

Many of the choices are crystal clear. African leaders need to set aside sectarian and religious differences in favor of inclusiveness, acknowledge and advocate for the rights of women and minorities, and they must accept that sexual orientation is a private matter. They must also build on their economic progress by eliminating graft and opening markets to free trade.

I am not sure these two things are compatible if Africa is looking to find the best partner for decisions ahead. To put it another way, has America proven itself a help or a hindrance for the past decade with humanitarian issues? How does it advocate for rights of women and minorities yet send drones with a high civilian casualty rate? The fundamental question of how to reconcile offers of assistance with foreign strings and caveats seems underplayed.

My experience in Africa is that the Chinese and Saudis push much more aggressive assistance programs with tangible results, everywhere from power plants and water supplies to schools and hospitals, without overt pressure on values alignment. Whereas a Saudi hospital might require women to cover their skin, which seems to America a horrible insult to women, Africans treat this as a minor and perhaps even amusing imposition to disobey. Meanwhile an American hospital where anonymity is impossible, and where patients are said to be removed without warning and “disappeared”, creates an environment of resistance.

Allow me to relate a simple example of how the US might be able to provide assistance while also finding values alignment:

Global efforts to fight malaria in Africa are less likely to fail because of the complicated nature of the disease than because of fraud. Kerry calls it graft. In Africa there is an unbelievable amount of graft tied directly to humanitarian efforts, and I doubt there is anyone in the world who would say fraud is necessary or good.

I have run the stats given to me by leaders of humanitarian projects and I even have toured some developments on the ground. Conclusions to me seem rather obvious. Since 1989 my studies of humanitarian/ethical intervention in Africa, particularly the Horn, have looked into reasons for failure and one universal truth stands out. Graft shows up as a core issue blocking global efforts to help Africa yet I’m not sure anyone who isn’t looking for it already really notices. Here’s a typical story that tends to have no legs:

The Global Fund to Fight AIDS, Tuberculosis and Malaria has suspended funding for two malaria grants in Mali and terminated a third for tuberculosis (TB) after finding evidence that $4 million has been embezzled, the organisation said on Tuesday.

Grants to Mali and four other countries – Ivory Coast, Djibouti, Mauritania and Papua New Guinea – have been put under closer scrutiny with tighter restrictions on cash movements.

[…]

The suspensions in Mali concern a $14.8 million malaria grant to buy and distribute insecticide-treated nets for pregnant women and young children; a $3.3 million grant for anti-malaria drugs; and a $4.5 million TB grant targeted at treating prisoners, people in mining communities and patients with multidrug resistant strains of TB among others.

Please verify and see for yourself. Seek answers for why assistance declines in areas most in need, or is rejected despite increasing demand. Disease cannot be eradicated if we back away when economic friction heats up. You may find, as I did, that projects stall when we cannot detect supply chain threats, report vulnerabilities and enforce controls.

On the flip side of this issue, imposing a solution from the top down or from outside only exacerbates the problem. Nobody wants an outsider to come in and accuse insiders of fraud if any internal methods exist. Outside pressure can shut down the relationship entirely and block all access, as well as undermine internal footholds, which is why you rarely find diplomats and humanitarian project leaders touching on this issue.

I have proposed technical solutions to solve some of these supply chain issues blocking Africa’s “rise”, although I doubt anyone is in a rush to implement them because politically the problem has been allowed to trundle along undefined. I am glad Kerry mentioned it as a footnote in America’s plan, as it needs to be picked up as more of a headline. It would be great to see “America helps Senegal reduce fraud in fight to eradicate disease”, for example. Until someone like Bill Gates says the problem we must overcome is weak systems that allow graft, we could just keep pumping assistance and yet see no gain, or even see reversals. In fact Africa could distance itself from America if our aid is misdirected while we attempt to impose our broader set of values on those who are not receiving any benefits.

American leaders now may want to help Africa rise and they have to find ways to operate in a market that feels more like a level playing field. We need to step in more as peers in a football match, rather than flood the field with referees, showing how we have solved similar problems while empowering local groups to solve them in ways that may be unfamiliar to us. Once we’re following a similar set of rules with clear opponents like fraud and malaria, we need to find ways to pass the ball and score as a team. Could Kerry next be talking about delivering solutions integrated with African values rather than pushing distinctly American ones as preconditions?

IT is a perfect fit here because it can support peer-based coalitions of authorities to operate on infrastructure without outside controls. Imagine a network of nodes emerging in Africa the same way the Internet first evolved in America, but with modern technology that is energy efficient, mobile and wireless.

A system to detect, report and block graft on a locally derived scale instead of promoting a centralized top-down monitoring system seems unlikely to happen. Yet that could be exactly what will make America a real partner to Africa’s rise. It begs the question whether anyone is positioned to deliver NFC infrastructure for nets and vaccines while also agreeing to step back, giving shared authority and responsibility to track progress with loose federation. America could be quite a help, yet also faces quite a challenge. Will Kerry find a way for Africans to follow a path forged by America in ways he may not be able to control?

Posted in Energy, History, Security.


2014 Österreich Stammtisch: The UnCERTainty of Attribution

I have been asked to post a copy of my presentation at the Stammtisch in Vienna. Many thanks to everyone for coming and for the excellent discussion.

Please find a PDF version here.

Posted in Security.


2014 SOURCE Boston: Delivering Security at Big Data Scale

Several people asked for a copy of my presentation slides at the April 2014 SOURCE conference in Boston. Thanks to everyone for coming and the great feedback!

Please find a PDF version here.

Posted in Security.


Yet “Unother” heartbleed Perspective (YUhP)

With so many people talking about Heartbleed and offering their insights (e.g. excellent posts from Bruce Schneier and Dan Kaminsky), I could not help but add my own. That is not entirely true. I was happy to let others field questions, and then reporters contacted me and wanted insights. After I sent my response they said my answers were helpful, so I thought I might as well re-post here.

So this is what I recently sent a reporter:

What is Heartbleed?

Heartbleed is a very small change made to a small part of code that is widely used to protect data. You might even say it is a flaw found in the infrastructure that we all rely upon for privacy. It is no overstatement to say it impacts just about everyone who has a password on the Internet. It’s basically like discovering all your conversations for the past two years that you thought were private actually could have been heard by someone without any effort. This is very dangerous and why it had to be fixed immediately. Potential for harm can be huge when trusted systems have been operating with a flaw. It is hard to quantify who really has been impacted, however, because the damage is a leak rather than an outage. We could look for evidence of leaks now, because people trying to take advantage of the leak will leave particular tracks behind, but it is very unlikely tracks will have been preserved for such a long time since the code change was made. The change unfortunately was not recognized as dangerous until very recently.

How is it related to encryption and the websites I use?

The simple explanation is that encryption was used on websites (as well as many other services, including email) to protect data. Encryption can prevent someone from seeing your private information. A change in the code of an encryption library known as OpenSSL actually undermined its ability to protect your data. Heartbleed means someone in a remote location can see data that was believed and intended to be private. Your password, for example, could have been seen by someone who knew of this OpenSSL flaw.

If possible, how can I protect myself now that it’s happened?

You can protect yourself going forward with two simple steps. First verify the sites you use have fixed the heartbleed flaw. Often they will push a notice saying they have addressed the problem, or they will post a notice that is easy to find on their site, or you can consult a list of sites that have been tested. Second, change your passwords.

Another way to protect yourself is to get involved in the debate. You could study the political science behind important decisions, such as when and how to trust changes, or the economics of human behavior. You also could study the technical details of the code to join the debate on how best to improve the quality that everyone may rely upon for their most trusted communication.

The reach of this story is amazing to me. It is like information security just became real for every person in every home. I sat on a bench in an airport the other day and listened to everyone around me give their (horribly incorrect) versions of heartbleed. Some thought it was a virus. Some thought it was related to Windows XP. But whatever they said, it was clear they suddenly cared a lot about whether and how they can trust technology.

I was probably too bullish on the traces/trail part of my answer. It is hard to stay high level while still figuring out some of the details underneath. I haven’t yet built a good high-level explanation for why the attack is not detectable by the system itself but that attack traffic has some obvious characteristics that can be captured by the system.

Anyway, this clearly boils down to code review. It is a problem as old as code itself. A luminary in the infosec space recently yelled the following on this topic:

THIS IS CALLED A BOUNDS CHECK. I SAW CODING ERRORS LIKE THIS IN THE 70’S

We know there are people very motivated to pore over every memcpy in an OpenSSL codebase, for example, to look for flaws. Some say the NSA would have found it and used it, but in reality the threat-scape is far larger and the NSA officially has denied awareness.
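To make the class of flaw concrete, here is a minimal sketch in Python of what a missing bounds check on a heartbeat-style echo handler looks like. This is illustrative only, using a toy in-memory buffer; the actual bug lived in OpenSSL's C heartbeat code, but the failure mode is the same: trusting a length field supplied by the requester.

```python
# Toy illustration of a Heartbleed-style missing bounds check (not OpenSSL code).
# Simulated process memory: the 4-byte request payload sits next to secrets.
MEMORY = bytearray(b"ping" + b"SECRET_SESSION_KEY" + b"user:password")

def heartbeat_vulnerable(claimed_len: int) -> bytes:
    # FLAW: trusts the requester's claimed length with no bounds check,
    # so it happily reads past the payload into adjacent memory.
    return bytes(MEMORY[:claimed_len])

def heartbeat_fixed(claimed_len: int, payload_len: int = 4) -> bytes:
    # FIX: the bounds check; never echo more than the payload actually sent.
    if claimed_len > payload_len:
        raise ValueError("claimed length exceeds payload; request dropped")
    return bytes(MEMORY[:claimed_len])

if __name__ == "__main__":
    print(heartbeat_vulnerable(4))   # b'ping' -- normal use
    print(heartbeat_vulnerable(40))  # leaks the adjacent "secret" bytes
    print(heartbeat_fixed(4))        # b'ping'; a claim of 40 would now raise
```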

We also know that finding a really bad bounds check does not necessarily mean any obligation to report it in a “fair” way that minimizes harm, which is a harsh reality that begs the question of human behavior. Before looking too deeply at the technical aspects of the problem, consider Bruce’s perspective:

This may be a massive computer vulnerability, but all of the interesting aspects of it are human.

If you are Bruce, of course you would say that. He finds human aspects interesting, with all due respect, because it is the less familiar angle to him — the focus of his new research. However, most people are unfamiliar with technology, let alone encryption and keys, so the human aspects are the less interesting angle and they want the technical problem explained. XKCD sees this latter demand. That’s why we now have the following beautiful explanation of technical aspects:

(xkcd comic explaining Heartbleed)

With that in mind I still actually agree with Bruce. The industry really needs to dig deep into the following sequence of events related to trusted infrastructure change controls and bug discovery. This is what I’ve gathered so far. We may learn more and revise this in the future, but I hope it helps illustrate the sort of interesting human aspects to sort out:

  1. OpenSSL code that created Heartbleed is committed an hour before midnight on New Year’s Eve 2011
  2. Codenomicon in early 2014 started testing an alpha piece of their Defensics fuzzing product called Safeguard; in April their automated tool finds a flaw in the authentication layer of OpenSSL messaging
  3. Flaw is privately communicated by Codenomicon to CERT in Finland
  4. Someone at Google finds the same flaw and notifies OpenSSL
  5. OpenSSL issues a patch and flaw goes public, leaving many scrambling to respond on an immediate basis

One other thought. I alluded to this in my answer to the journalist but I want to make a finer point here. Some are calling this a maximum level danger event. That perspective begs the question whether data being destroyed, changed or denied could ever be seen as more dangerous. An 11-out-of-10 event for the cryptography community may look different to the availability community. That actually seems to be one of the reasons I have seen management allow encryption to fail: availability risks were seen as more dangerous than confidentiality risks when, unfortunately, there was a trade-off.

Updated to add: Google staff have started to actively dispute claims anyone found the bug earlier than their company did. Microsoft and Facebook offered money to the Google person who proved the bug to them first, but the money was redirected to a charity rather than accepted.

A timeline favoring Google’s interpretation of events, with the vague discovery listed as “March 21 or before,” has been published by a paper in Sydney. Note the author request:

If you have further information or corrections – especially information about what occurred prior to March 21 at Google – please email the author…

This makes it sound like Google needs help to recall events prior to March 21, or doesn’t want to open up. Codenomicon’s claim was that it had been testing for months prior to discovery. In any case, everything seems to initiate around September 2013, probably not by coincidence, begging the question of human issues more than technical ones.

Posted in Security.


Troubled Audit Waters: Trustwave and the Target Breach

My last post is probably overkill on the Microsoft topic so here’s a TL;DR version of one aspect of that story.

Microsoft mentions an independent auditor will help them avoid risk in the future. In order not to violate the privacy of their customers without due cause, they will ask a specific 3rd-party attorney of their choosing for an opinion on the matter.

That does not give me much confidence. It seems only slightly less likely to fail, at least in obvious terms of independence.

Take a look at an important related story in the news: Target’s QSA (qualified security assessor) Trustwave, who was meant to help stop privacy violation of payment cardholders, is being sued by banks.

There are two parts to the story. One is that an assessor is in a complicated responsibility dance with their client. Did the client fail in their burden to disclose details to the assessor? Did the assessor fail to notice this failure? Did the assessor intentionally overlook failures? The debate over these problems is ancient and the lawsuits are likely to draw from a large body of knowledge, driven in some part by the insurance industry.

The other part of the story is that Trustwave apparently was running a portion of security operations at Target, not just assessing them for adequacy of controls. This is the more interesting angle to me because it seems like a relatively easy risk to avoid.

An assessor is meant to test controls in place. If the control in place is run by the same company as the one assessing its adequacy, then independence is dubious and a conflict-of-interest test is required.

For example, assessor Alice finds Retailer has inadequate IDS. Alice recommends Retailer replace existing and buy new IDS service from service provider Bob. Bob sets up IDS services and then Alice says Retailer has adequate IDS controls. Then Retailer is breached and people notice Alice and Bob work for the same company. Lawyers ask if Alice was conspiring with Bob to sell IDS and rubber-stamp assessments, without regard to actual compliance requirements.

Companies have internal auditors test internal controls all the time, so it’s not impossible or improbable to have a single authority sit above and manage both roles. Independence is best served transparently. However, one of the primary benefits of bringing in a 3rd-party independent assessment is that it offers the clearest form of independence from any operational influences.

The bottom line is that Trustwave was known for selling services and assessing those same services in order to maximize income opportunities and grow their practice; they found a more lucrative but far less clean business model that now begs the question of adequate separation. If the Target investigations question the model, it could change the industry.


Update March 29: Trustwave’s CEO Robert McCullen has posted an announcement, specifically mentioning the conflict-of-interest issue.

In response to these legal filings, Trustwave would like to reassure our customers and business partners that these claims against Trustwave are without merit, and that we look forward to vigorously defending ourselves in court against these baseless allegations.

Contrary to the misstated allegations in the plaintiffs’ complaints, Target did not outsource its data security or IT obligations to Trustwave. Trustwave did not monitor Target’s network, nor did Trustwave process cardholder data for Target.

As I said, this is a key issue to watch in the dispute.

Posted in Security.


#Hotmailgate: Where Don’t You Want to Go Today?

I thought with all the opinions flowing about the Hotmail privacy incident I would throw my hat into the ring. Perhaps most notably Bruce Schneier has done an excellent job warning people not to believe everything Google or Facebook is saying about privacy.

Before I get to Bruce’s article (below) I’d like to give a quick summary of the details I found interesting when reading about the Microsoft Hotmail privacy incident.

How it Begins

We know, for example, that the story begins with a Microsoft employee named Alex Kibkalo who was given a less-than-stellar performance review. This employee, a Russian native who worked from both Russia and Lebanon, reacted unfavorably and stole Microsoft pre-release updates for Windows RT and an Activation Server SDK.

Russia? Lebanon?

Perhaps it is fair to say the software extraction was retaliatory, as the FBI claims, but that also seems to be speculation. He may have had other motives. Some could suggest Alex’s Russian/Lebanese associations could have some geopolitical significance as well, for example. I so far have not seen anyone even mention this angle to the story but it seems reasonable to consider. It also raises the thorny question of how rights differ by location and nationality, especially in terms of monitoring and privacy.

Microsoft Resources Involved

More to the point of this post, from Lebanon Alex was able to quickly pull the software he wanted off Microsoft servers to a Virtual Machine (VM) running in a US Microsoft facility. Apparently downloading software all the way to Lebanon would have taken too long so he remotely controlled a VM and leveraged high speeds and close proximity of systems within US Microsoft facilities.

Alex then moved the stolen software from the Microsoft internal VM to the Microsoft public SkyDrive cloud-based file-sharing service. With the stolen goods now in a place easily accessible to anyone, he emailed a French blogger.

The blogger was advised to have a technical person use the stolen software to build a service that would allow users to bypass Microsoft’s official software activation. The blogger publicly advertised the activation keys for sale on eBay and sent an email, from a Hotmail account, to a technical person for assistance with the stolen software. This technical person instead contacted Microsoft.

Recap

To recap, an internal Microsoft employee used a Microsoft internal VM and a Microsoft public file-sharing cloud to steal Microsoft assets.

He either really liked using Microsoft or knew that they would not notice him stealing.

The intended recipient of those assets also used a Microsoft public cloud email account to communicate with the employee stealing the software, as well as with a person friendly to Microsoft senior executives.

When All You Have is a Hammer

Microsoft missed several red flags. Neither their internal virtual environment nor their public cloud was detecting a theft in progress. A poor performance review could have been tied to heightened network monitoring, watching for movement of large assets or flagging communication with other internal staff who may have been working on behalf of the employee. Absent more advanced detective capabilities, let alone preventive ones, someone like Alex moves freely across Microsoft resources to steal assets.

A 900-lb gorilla approach to this problem must have seemed like a good idea to someone in Microsoft management. I have heard people suggest a rogue legal staff member was driving the decisions, yet this doesn’t sound plausible.

Having worked with gigantic legal entities in these scenarios, I suspect the investigation and legal teams were coordinated and top-down. Ironically, perhaps the steps most damaging to customer trust might have been taken by a team called Trustworthy Computing Investigations (TWCI). They asked the Office of Legal Compliance (OLC) for authorization to compromise customer accounts. That to me indicates the opposite of any rogue effort; it was a management-led mission based on an internal code-of-conduct and procedures.

Hotmail Broken

The real controversy should be that the TWCI target was not internal. Instead of digging around Microsoft’s own security logs and controls, looking at traces of employee activity for what they needed, Microsoft compromised a public customer Hotmail account (as well as a physical home) with the assistance of law enforcement in several countries. They found traces they were looking for in the home and Hotmail account; steps that explained how their software was stolen by an internal employee as well as signs of intent.

The moral of the story, unfortunately, seems to be Microsoft internal security controls were not sufficient on their own, in speed or cost or something else, which compelled the company to protect themselves with a rather overt compromise of customer privacy and trust. This naturally has led to a public outcry about whether anyone can trust a public cloud, or even webmail.

Microsoft, of course, says this case is the exception. They say they had the right under their service terms to protect their IP. These are hard arguments to dispute, since an employee stealing Microsoft IP and using Microsoft services, and even trying to sell the IP by contacting someone friendly with Microsoft, can not possibly be a normal situation.

On the other hand, what evidence do we have now that Microsoft would restrict themselves from treating public as private?

With that in mind, Microsoft has shown their hand; they struggle to detect or prevent IP-theft as it happens, so they clearly aim to shoot after-the-fact and as necessary. There seems to be no pressure to do things by any standard of privacy (e.g. one defined by the nationality of the customer) other than one they cook up internally weighted by their own best interests.

Note the explanation by their Deputy Counsel:

Courts do not issue orders authorizing someone to search themselves, since obviously no such order is needed. So even when we believe we have probable cause, it’s not feasible to ask a court to order us to search ourselves.

They appear to be defining customers as indistinguishable from Microsoft employees. If you are a Hotmail user, you are now a part of Microsoft’s corporate “body”. Before you send HR an email asking for healthcare coverage, however, note that they also distinguish Microsoft personal email from corporate email.

The only exception to these steps will be for internal investigations of Microsoft employees who we find in the course of a company investigation are using their personal accounts for Microsoft business.

So if I understand correctly, Microsoft employees are allowed an illusion of distinguishing personal email on Hotmail from their business email, which doesn’t really make any sense because even public accounts on Hotmail are treated like part of the corporate body. And there’s no protection from searches anywhere anyway. When Microsoft internal staff, and an external attorney they have hired, believe there is probable cause then they can search “themselves”.

And for good measure, I found a new Google statement that says essentially the same thing. They reserve the right to snoop public customer accounts, even journalists.

“[TechCrunch editor Michael Arrington] makes a serious allegation here — that Google opened email messages in his Gmail account to investigate a leak,” Kent Walker, Google general counsel, said in a statement. “While our terms of service might legally permit such access, we have never done this and it’s hard for me to imagine circumstances where we would investigate a leak in that way.”

Hard perhaps for Kent to imagine, but with nothing stopping them…is imagination really even relevant?

Back to Schneier

Given this story as background, I’d like to respond to Bruce Schneier’s excellent article with the long title: “Don’t Listen to Google and Facebook: The Public-Private Surveillance Partnership Is Still Going Strong”

These companies are doing their best to convince users that their data is secure. But they’re relying on their users not understanding what real security looks like.

This I have to agree with. Reading the Microsoft story I first was shocked to hear they had cracked their own customer’s email account. Then after I read the details I realized they had probable cause and they followed procedures…until I reached the point where I realized there was nothing being said about real security. It begs a simple question:

Should Microsoft’s inability to detect or prevent a theft carried out using their own private and public services be a reasonable justification for very broad holes in customer terms of service?

Something Just Hit the Fan

Imagine you are sitting on a toilet in your apartment. That apartment was much more convenient to move into compared to building your own house. But then, suddenly, the owner is standing over you. The owner says since they can’t tell when widgets are taken from their offices (e.g. they can’t detect which of their employees might be stealing) and they have probable cause (e.g. someone says you were seen with a missing widget) they can enter your bathroom at any time to check.

Were you expecting privacy while you sat on your toilet in your apartment?

Microsoft clearly disagrees and says there’s no need to even knock since they’re entering their own bathroom…in fact, all the bathrooms are theirs and no-one should be able to lock them out. Enjoy your apartment stay.

Surveillance, Not Surveillance

Real security looks like the owners detecting theft or preventing theft in “their” space rather than popping “your” door open whenever they feel like it. I hate to say it this way but it’s a political problem, rather than a technical one: what guide should we use to do surveillance in places that are socially agreed-upon, such as watching a shared office to reduce risks of theft, rather than threaten surveillance in places people traditionally and reasonably expect privacy?

So here is where I disagree with Schneier:

Google, and by extension, the U.S. government, still has access to your communications on Google’s servers. Google could change that. It could encrypt your e-mail so only you could decrypt and read it. It could provide for secure voice and video so no one outside the conversations could eavesdrop. It doesn’t. And neither does Microsoft, Facebook, Yahoo, Apple, or any of the others. Why not? They don’t partly because they want to keep the ability to eavesdrop on your conversations.

Ok, I actually sort of agree with that. Google could provide you with the ability to lock them out, prevent them from seeing your data. But saying they want to eavesdrop on your conversations is where I start to think differently from Bruce. They want to offer tailored services, marketing if you allow it. The issue is whether we must define an observation space for these tailored services as completely and always open (e.g. Microsoft’s crazy definition of everything as “self”) or whether there is room for privacy.

Give Me Private Cloud or Give Me Encryption…OK I’ll Take Both

Suddenly, and unexpectedly, I am seeing movement towards cloud encryption using private-keys unknown to the provider. Bruce says this is impossible because “the US government won’t permit it”. I disagree. For years I worked with product companies to create this capability and was often denied. But it was not based on some insidious back-door or government worry. Product managers had many reasons why they hated to allow encryption into the road-map and the most common was there simply was not enough demand from customers.

Ironically, the rise of isolated but vociferous demand actually could be the reason we now will see it happen. If Google and Apple move towards a private-key solution, even if only to fly the “we’re better than Microsoft” flag, only a fraction of users will adopt (there’s an unknown usability/cost factor here). And of those users that do adopt eagerly, what is the percentage the government comes knocking for with a warrant or a subpoena to decrypt? Probably a high percentage, yet still a small population. Provided the cloud providers properly set up key management, they should be able to tell the government they have no way to decrypt or access the data.
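As a minimal sketch of what such a private-key arrangement could look like, assuming the third-party Python `cryptography` package (an assumption for illustration, not any provider’s actual implementation): the customer generates and keeps the key, the provider stores only ciphertext, and so the provider has nothing useful to hand over.

```python
# Sketch of client-side encryption where the provider never sees the key.
# Assumes the third-party `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

# Key is generated and held on the customer's device only.
key = Fernet.generate_key()
client = Fernet(key)

# Only ciphertext is uploaded; this dict stands in for the cloud provider.
cloud_storage = {"notes.txt": client.encrypt(b"draft merger terms")}

# The provider (or a warrant served on the provider) can produce only this:
print(cloud_storage["notes.txt"][:20], b"...")

# Only the customer, holding the key, can recover the plaintext.
print(client.decrypt(cloud_storage["notes.txt"]))
```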

Economics to the Rescue

This means from a business view the cloud provider could improve their offering to customers by enhancing trust with privacy controls, while at the same time reducing a cost burden of dealing with government requests for data. It could be a small enough portion of the users it wouldn’t impact services offered to the majority of users. This balance also could be “nudged” using cost; those wanting enhanced privacy pay a premium. In the end, there would be no way a provider could turn over a key that was completely unknown to them. And if Bruce is right that the government gets in no matter what, then all the more reason for cloud providers to raise the bar above their own capabilities.

We should have been headed this way a long time ago but, as I’ve said, the product managers really did not believe us security folks when we begged, pleaded and even demanded privacy controls. Usability, performance and a list a mile long of priorities always came first. Things have changed dramatically in the past year and #Hotmailgate really shows us where we don’t want to go. I suspect Microsoft and its competitors are now contemplating whether and how to incorporate real private-key systems to establish better public cloud privacy options, given the new economic models and customer demands developing.

Posted in Security.


Mining and Visualizing YouTube Metadata for Threat Models

For several years I’ve been working on ways to pull metadata from online video viewers into threat models. In terms of early-warning systems or general trends, metadata may be a useful input on what people are learning and thinking about.

Here’s a recent example of a relationship model between viewers that I just noticed:

A 3D map (from a company so clever they have managed to present software advertisements as legitimate TED talks) indicates that self-reporting young viewers care more about sewage and energy than they care about food or recycling.

The graph also suggests video viewers who self-identify as women watch videos on food rather than energy and sewage. Put young viewers and women viewers together and you have a viewing group that cares very little about energy technology.

I recommend you watch the video. However, I ask that you please first setup an account with false gender to poison their data. No don’t do that. Yes, do…no don’t.

Actually what the TED talk reveals, if you will allow me to get meta for a minute, is that TED talks often are about a narrow band of topics despite claiming to host a variety of presenters. Agenda? There seem to be extremely few outliers or innovative subjects, according to the visualization. Perhaps this is a result of how the visual was created — categories of talks were a little too broad. For example, if you present a TED talk on password management and sharks and I present on reversing hardware and sharks, that’s both just interest in nature, right?

The visualization obscures many of the assumptions made by those who painted it. And because it is a TED talk we give up 7 minutes of our lives yet never get details below the surface. Nonetheless, this type of analysis and visualization is where we all are going. Below is an example from one of my past presentations, where I discussed capturing and showing high-level video metadata on attack types and specific vulnerabilities/tools. If you are not doing it already, you may want to think about this type of input when discussing threat models.
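As a rough sketch of the kind of aggregation I mean, the snippet below counts views of attack-tutorial topics by region and flags concentrations worth noting in a threat model. The field names, sample records and threshold are hypothetical stand-ins, not taken from any real video platform API.

```python
# Hypothetical sketch: aggregate video-viewing metadata by (region, topic)
# and flag concentrations of interest for a threat model. All field names,
# records and the threshold are illustrative stand-ins.
from collections import Counter

view_events = [
    {"region": "US", "topic": "sql injection", "views": 1200},
    {"region": "BR", "topic": "sql injection", "views": 3400},
    {"region": "US", "topic": "password reuse", "views": 300},
    {"region": "IN", "topic": "sql injection", "views": 2900},
]

def flag_hotspots(events, threshold=1000):
    totals = Counter()
    for e in events:
        totals[(e["region"], e["topic"])] += e["views"]
    # Keep only region/topic pairs with enough interest to mention.
    return [(r, t, n) for (r, t), n in totals.items() if n >= threshold]

for region, topic, views in flag_hotspots(view_events):
    print(f"{region}: {views} views of '{topic}' tutorials")
```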

Here I show the highest concentrations of people in the world who are watching video tutorials on how to use SQL injection:

Posted in Energy, Food, Security.


What Surveillance Taught Me About the NSA and Tear Gas: It’s Time to Rethink our Twitters about Nightmares

Medium read: 23.45 minutes at 1024×768

Zeynep Tufekci has tweeted a link to a journal of her thoughts on surveillance and big data.

#longreads version of my core thesis: “Is the Internet Good or Bad? Yes.” I reflect on Gezi, NSA & more.

The full title of the post is “What tear gas taught me about Twitter and the NSA: It’s time to rethink our nightmares about surveillance.”

I noticed right away she used a humble brag to describe events at a recent conference she attended:

A number of high-level staff from the data teams of the Obama and Romney campaigns were there, which meant that a lot of people who probably did not like me very much were in the room.

You hate it when high-level people do not like you…? #highlevelproblems?

She then speculates on why she probably would not be liked by such high-level people. Apparently she has publicly caricatured and dismissed their work, writing that “richer data for the campaigns could mean poorer democracy for the rest of us”. She expects them not to like her personally for this.

I say she “speculates” because she does not quote anyone saying they “did not like” her. Instead she notes they have publicly dismissed her dismissal of their work.

My guess is she wants us to see the others as angry or upset with her personally to set the stage for us seeing her in the hot-seat as a resistance thinker; outnumbered and disliked for being right/good, she is standing up for us against teams of bipartisan evil data scientists.

Here is how she describes meeting with the chief scientist on Obama’s data analytics team, confronting him with a hard-hitting ethical dilemma and wanting to tell him to get off the fence and take a stand:

I asked him if what he does now—marketing politicians the way grocery stores market products on their shelves—ever worried him. It’s not about Obama or Romney, I said. This technology won’t always be used by your team. In the long run, the advantage will go to the highest bidder, the richer campaign.

He shrugged, and retreated to the most common cliché used to deflect the impact of technology: “It’s just a tool,” he said. “You can use it for good; you can use it for bad.”

“It’s just a tool.” I had heard this many times before. It contains a modicum of truth, but buries technology’s impacts on our lives, which are never neutral. Often, I asked the person who said it if they thought nuclear weapons were “just a tool.”

The data scientist appears to say a decision on whether the tool is good or bad in the future is not up to him. It’s a reasonable answer. Zeynep calls this burying the truth, because technology is never neutral.

To be honest there is a part of me tempted to agree with her here. That would be a nice, quiet end to my blog post.

But I must go on…

Unfortunately I cannot stop here, because she does not stop there either. Instead, she goes on to apparently contradict her own argument about tools being non-neutral…and that just happens to be the sort of thing that drives me to write a response.

The reason I would agree with her is that I often make this argument myself. It is great to see her make it. Just the other day I saw someone tweet that technology can’t be evil, and I had to tweet back that some technology can be labeled evil. In other words, a particular technology can be defined by social convention as evil.

This is different from the argument that technology can never be neutral, but it is similar. I believe much of it is neutral in a natural state and acquires a good/bad status depending on use, but there still are cases where it is inherently evil.

The philosophical underpinning of my argument is that society can choose to label some technology as evil when they judge no possible good that can outweigh the harm. A hammer and a kitchen knife are neutral. In terms of evil, modern society is reaching the highest levels of consensus when discussing cluster-bombs, chemical weapons, land-mines and even Zeynep’s example of nuclear weapons.

My keynote presentation at the 2011 RSA Conference in London used the crossbow as an example of the problem of consensus building on evil technology. 500 years ago the introduction of a simple weapon that anyone could easily learn meant a sea change in economic and political stability: even the most skilled swordsman no longer stood a chance against an unskilled peasant who picked up a crossbow.

You might think this meant revolution was suddenly in the hands of peasants to overthrow their king and his mighty army of swordsmen. Actually, imagine the opposite. In my presentation I described swordsmen who attempted to stage a coup against their own king. A quickly assembled army of mercenary peasants was imported and paid to mow down the revolutionary swordsmen with crossbows. The swordsmen then would petition a religious leader to outlaw crossbows as non-neutral technology, inherently evil, and restore their ability to protect themselves from the king.

The point is we can have standards, conventions, or regulations that define a technology as inherently evil when enough people agree that more harm than good will always result from its use.

Is the Internet just a tool?

With that in mind, here comes the contradiction and why I have to disagree with her. Remember, above Zeynep asked a data scientist to look into the future and predict whether technology is bad or good.

She did not accept leaving this decision to someone else. She did not accept his “most common cliché used to deflect the impact of technology”. And yet she says this:

I was asked the same question over and over again: Is the internet good or bad?

It’s both, I kept saying. At the same time. In complex, new configurations.

I am tempted to use her own words in response. This “contains a modicum of truth, but buries technology’s impacts on our lives, which are never neutral.” I mean, does Zeynep also think nuclear weapons are “both good and bad at the same time, in complex, new configurations”? Deterrence was certainly an argument used in the past with exactly this sort of reasoning to justify nuclear weapons: they are bad but they are good, so they really are neutral until you put them in someone’s hands.

And on and on and on…

The part of her writing I enjoy most is how she personalizes the experience of resistance and surveillance. It makes for very emotionally charged and dramatic reading. She emphasizes how we are in danger of a Disney-esque world of perfect surveillance. She tells us about people who, unable to find masks when they disagree with their government, end up puking from tear gas. Perhaps the irony between these two points is lost on her. Perhaps I am not supposed to see them as incongruous. Either way, her post is enlightening as a string of first-person observations.

The part of her writing I struggle with most is the lack of political theory, let alone science. She does not touch on the essence of discord. Political science studies of violent protests around the world in the 1960s, for example, were keying in on the nature of change. Technology was a factor then too, and in the era before that, and the era before that, which raises a fundamental question: are there lessons we have already learned? Maybe this is not the first time we’ve crossed this bridge.

Movements towards individualism, opportunity, creativity, and a truly thinking and nourishing society appear to bring forth new technology, perhaps even more than new technology causes them. Just as the crossbow was developed to quickly reduce a swordsman’s ability to protect his interests, innovations in surveillance technology may have been developed to reduce a citizen’s ability to protect theirs. Unlike the crossbow, however, surveillance does not appear to be so clearly and consistently evil. Don’t get me wrong: more people than ever are working to classify uses of surveillance tools as evil, and some of it is very evil, but not all of it.

Harder questions

Political science suggests there is always coercion in government. Most people do not mind some amount of coercion when it is exchanged for something they value. Then, as that value shrinks and progress towards a replacement is not rapid enough, friction builds and people pull back towards independence. So a loss of independence theoretically can be balanced against some form of good.

It is obvious that surveillance technology (e.g., Twitter) has found positive uses in many cases, such as monitoring health, natural disasters, or accidents. It can even be argued that political parties have found beneficial uses for surveillance, such as fraud monitoring. The hard question is how to know when any act of surveillance, more than the latest technology, becomes evil by majority definition, and what oversight is required to ensure we do not cross that point. She seems to suggest the individual is never safe:

[Companies and political parties] want us to click, willingly, on a choice that has been engineered for us. Diplomats call this soft power. It may be soft but it’s not weak. It doesn’t generate resistance, as totalitarianism does, so it’s actually stronger.

This is an oversimplified view of both company and political party relationships with individuals. Such an oversimplification makes it easy to “intertwine” concepts of rebellion and surveillance, and to cast diplomats as some sort of Machiavellian actor. The balance between state and individual is not inherently or always a form of deception to lull individuals into compliance without awareness of risks. There can actually be a neutral position, just as with technology.

What should companies and political parties offer us if not something they think we want? Should individuals be given choices that have not been engineered in any way? The act of providing a choice is often itself a form of engineering, as documented in elections with high rates of illiteracy (where candidates are “randomly” assigned an icon to represent them on ballots).

Should individuals be given a choice completely ignorant of our desires? That raises the very question of market function and competition. It brings to mind Soviet-era systems that pretended to “ignore” desire in order to provide “neutral” choices, replacing it instead with centrally planned outcomes. We should think carefully about the value offered to the individual by a government or a company, and at what point value becomes “seduction” to maintain power through coercion.

Ultimately, despite having earlier criticized others for “retreating” to a neutral ground, her conclusion ends up in the same place:

Internet technology lets us peel away layers of divisions and distractions and interact with one another, human to human. At the same time, the powerful are looking at those very interactions, and using them to figure out how to make us more compliant.

In other words, Internet technology is neutral.

When we connect with others we may become more visible; the connection still has value when visibility is a risk. When we connect we may lose independence; the connection still has value when loss of independence is a risk.

It is disingenuous for us to label anyone who watches us as “the powerful” or to call ways that “make us more compliant” inherently evil. Compliance can be a very good thing, obviously.

Zeynep offers us interesting documentation of first-person observations but offers little in the way of analysis and historical context. She also gives unfair treatment to basic political science issues and criticizes others before she seems to arrive at the same conclusion.

As others have said, it’s a “brilliant, profoundly disturbing piece”.

Posted in History, Security.