What Surveillance Taught Me About the NSA and Tear Gas: It’s Time to Rethink our Twitters about Nightmares

Medium read: 23.45 minutes at 1024×768

Zeynep Tufekci has tweeted a link to a journal of her thoughts on surveillance and big data.

#longreads version of my core thesis: “Is the Internet Good or Bad? Yes.” I reflect on Gezi, NSA & more.

The full title of the post is “What tear gas taught me about Twitter and the NSA: It’s time to rethink our nightmares about surveillance.”

I noticed right away that she used a humblebrag to describe events at a recent conference she attended:

A number of high-level staff from the data teams of the Obama and Romney campaigns were there, which meant that a lot of people who probably did not like me very much were in the room.

You hate it when high-level people do not like you…? #highlevelproblems?

She then speculates on why such high-level people probably would not like her. Apparently she has publicly caricatured and dismissed their work, arguing that "richer data for the campaigns could mean poorer democracy for the rest of us." She expects them not to like her personally for this.

I say she "speculates" that she is not liked because she does not quote anyone saying they "did not like" her. Her only evidence is that she herself has publicly dismissed their work.

My guess is she wants us to see the others as angry or upset with her personally, to set the stage for casting herself as a resistance thinker in the hot seat: outnumbered and disliked for being right and good, standing up for us against teams of bipartisan evil data scientists.

Here is how she describes meeting the chief scientist on Obama's data analytics team, confronting him with a hard-hitting ethical dilemma and wanting to tell him to get off the fence and take a stand:

I asked him if what he does now — marketing politicians the way grocery stores market products on their shelves — ever worried him. It’s not about Obama or Romney, I said. This technology won’t always be used by your team. In the long run, the advantage will go to the highest bidder, the richer campaign.

He shrugged, and retreated to the most common cliche used to deflect the impact of technology: “It’s just a tool,” he said. “You can use it for good; you can use it for bad.”

“It’s just a tool.” I had heard this many times before. It contains a modicum of truth, but buries technology’s impacts on our lives, which are never neutral. Often, I asked the person who said it if they thought nuclear weapons were “just a tool.”

The data scientist appears to be saying that whether the tool is used for good or bad in the future is not up to him. It's a reasonable answer. Zeynep calls this burying the truth, because technology is never neutral.

To be honest, there is a part of me tempted to agree with her here. That would be a nice, quiet end to my blog post.

But I must go on…

Unfortunately I cannot stop here, because she does not end her post there either. Instead, she goes on to apparently contradict her own argument that tools are non-neutral…and that is just the sort of thing that drives me to write a response.

The reason I am tempted to agree with her is that I often make this argument myself. It's great to see it made by her. Just the other day I saw someone tweet that technology can't be evil, and I had to tweet back that some technology can be labeled evil. In other words, a particular technology can be defined by social convention as evil.

This is different from the argument that technology can never be neutral, but it is similar. I believe much of technology is neutral in its natural state and acquires a good or bad status depending on use, but there still are cases where it is inherently evil.

The philosophical underpinning of my argument is that society can choose to label some technology as evil when it judges that no possible good can outweigh the harm. A hammer and a kitchen knife are neutral. In terms of evil, modern society is reaching its highest levels of consensus when discussing cluster bombs, chemical weapons, land mines, and even Zeynep's example of nuclear weapons.

My keynote presentation at the 2011 RSA Conference in London used the crossbow as an example of the problem of building consensus on evil technology. Centuries ago, the introduction of a simple weapon that anyone could easily learn meant a sea change in economic and political stability: even the most skilled swordsman no longer stood a chance against an unskilled peasant who picked up a crossbow.

You might think this put revolution in the hands of peasants, ready to overthrow their king and his mighty army of swordsmen. Actually, imagine the opposite. In my presentation I described swordsmen who attempted to stage a coup against their own king. A quickly assembled army of mercenary peasants was imported and paid to mow down the revolutionary swordsmen with crossbows. The swordsmen would then petition a religious leader to outlaw crossbows as non-neutral, inherently evil technology, and restore their ability to protect themselves from the king.

The point is that we can have standards, conventions, or regulations that define a technology as inherently evil when enough people agree that its use will always result in more harm than good.

Is the Internet just a tool?

With that in mind, here comes the contradiction, and the reason I have to disagree with her. Remember: above, Zeynep asked a data scientist to look into the future and predict whether a technology is good or bad.

She did not accept leaving this decision to someone else. She did not accept his “most common cliche used to deflect the impact of technology”. And yet she says this:

I was asked the same question over and over again: Is the internet good or bad?

It’s both, I kept saying. At the same time. In complex, new configurations.

I am tempted to use her own words in response. This "contains a modicum of truth, but buries technology's impacts on our lives, which are never neutral." I mean, does Zeynep also think nuclear weapons are "both good and bad at the same time, in complex, new configurations"?

Deterrence was certainly an argument used in the past with exactly this sort of reasoning to justify nuclear weapons: they are bad, but they are good, so they really are neutral until you put them in someone's hands.

And on and on and on…

The part of her writing I enjoy most is how she personalizes the experience of resistance and surveillance. It makes for very emotionally charged and dramatic reading. She emphasizes how we are in danger of a Disney-esque world of perfect surveillance. She tells us about people who, unable to find masks when they disagree with their government, end up puking from tear gas. Perhaps the irony between these two points is lost on her. Perhaps I am not supposed to see them as incongruous. Either way, her post is enlightening as a string of first-person observations.

The part of her writing I struggle with most is its lack of political theory, let alone political science. She does not touch on the essence of discord. Political science studies of violent protests around the world in the 1960s, for example, were keying in on the nature of change. Technology was a factor then too, and in the era before that, and the one before that, which raises a fundamental question: have we learned any lessons already? Maybe this is not the first time we've crossed this bridge.

Movements towards individualism, opportunity, creativity, and a truly thinking and nourishing society appear to bring forth new technology, perhaps even more than new technology causes them. Just as the crossbow was developed to quickly reduce a swordsman's ability to protect his interests, innovations in surveillance technology might have been developed to reduce a citizen's ability to protect theirs. Unlike the crossbow, however, surveillance does not appear to be so clearly and consistently evil. Don't get me wrong: more people than ever are working to classify uses of surveillance tools as evil, and some of it is very evil. But not all of it.

Harder questions

Political science suggests there is always coercion in government. Most people do not mind some amount of coercion when it is exchanged for something they value. As that value shrinks, and progress towards a replacement value is not rapid enough, friction builds and people pull back towards independence. So a loss of independence theoretically can be balanced against some form of good.

It is obvious that surveillance technology (e.g. Twitter) has in many cases found positive uses, such as monitoring health, natural disasters, or accidents. It can even be argued that political parties have found beneficial uses for surveillance, such as fraud monitoring. The hard question is how to know when any act of surveillance, more than the latest technology, becomes evil by majority definition, and what oversight is required to ensure we do not cross that point. She seems to suggest the individual is never safe:

[Companies and political parties] want us to click, willingly, on a choice that has been engineered for us. Diplomats call this soft power. It may be soft but it’s not weak. It doesn’t generate resistance, as totalitarianism does, so it’s actually stronger.

This is an oversimplified view of both company and political party relationships with individuals. Such an oversimplification makes it easy to "intertwine" concepts of rebellion and surveillance, and to cast diplomats as some sort of Machiavellian actor. The balance between state and individual is not inherently or always a form of deception meant to lull individuals into compliance without awareness of risks. There can actually be a neutral position, just as with technology.

What should companies and political parties offer us if not something they think we want? Should individuals be given choices that have not been engineered in any way? The act of providing a choice is often itself a form of engineering, as documented in elections with high rates of illiteracy (where candidates are “randomly” assigned an icon to represent them on ballots).

Should individuals be given choices completely ignorant of their desires? That raises the very question of market function and competition. It brings to mind Soviet-era systems that pretended to "ignore" desire in order to provide "neutral" choices, replacing it with centrally planned outcomes. We should think carefully about the value offered to the individual by a government or a company, and at what point that value becomes "seduction" to maintain power through coercion.

Ultimately, despite having earlier criticized others for "retreating" to neutral ground, she ends up concluding in the same place:

Internet technology lets us peel away layers of divisions and distractions and interact with one another, human to human. At the same time, the powerful are looking at those very interactions, and using them to figure out how to make us more compliant.

In other words, Internet technology is neutral.

When we connect with others we may become more visible; the connection still has value when visibility is a risk. When we connect we may lose independence; the connection still has value when loss of independence is a risk.

It is disingenuous for us to label anyone who watches us as "the powerful," or to call everything that "makes us more compliant" inherently evil. Compliance can be a very good thing, obviously.

Zeynep offers us interesting documentation of first-person observations but little in the way of analysis or historical context. She also gives unfair treatment to basic political science issues, and criticizes others for a conclusion she herself seems to arrive at.

As others have said, it’s a “brilliant, profoundly disturbing piece”.
