Category Archives: History

Cyber-Colonialism and Beliefs About Global Development

The Congo had roughly 20 million people in 1885, when Belgian King Leopold II claimed it as his private colony and ran it as a police state that tortured and killed up to 10 million people.

Full disclosure: I spent my undergraduate and graduate studies researching the ethics of intervention, with a focus on the Horn of Africa. One of the most difficult questions to answer was how to define colonialism. Take Ethiopia, for example. It was never colonized, and yet the British invaded, occupied and controlled it from 1940-1943 (the topic of my MSc thesis at LSE).

I’m not saying I am an expert on colonialism. I’m saying after many years of research including spending a year reading original papers from the 1940s British Colonial office and meeting with ex-colonial officers, I have a really good sense of how hard it is to become an expert on colonialism.

Since then, every so often, I hear someone in the tech community coming up with a theory about colonialism. I do my best to dissuade them from going down that path. Here came another opportunity on Twitter from Zooko:

This short post instantly changed my beliefs about global development. “The Dawn of Cyber-Colonialism” by @GDanezis

If nothing else, I would like to encourage Zooko and the author of “dawn of Cyber-Colonialism” to back away from simplistic invocations of colonialism and choose a different discourse to make their point.

Maybe I should start by pointing out an irony often found in the anti-colonial argument. The usual worry about “are we headed towards colonialism” is tied to some rather unrealistic assumptions. It often reads as a thinly-veiled way of thinking out loud: “our technology is so superior to these poor savage countries, and they have no hope without us, so we must be careful not to colonize them with it”.

A lack of self-awareness in commercial views is nothing new. John Stuart Mill, for example, opined in the 1860s that only through commercial influence would individuals realize true freedom and self-governance; yet he feared colonialists could spoil everything by never restraining or developing beyond their own self-interest. His worry was specifically that colonizers did not understand local needs, lacked sympathy, could not remain impartial in questions of justice, and would always think of their own profits before development. (Considerations on Representative Government)

I will leave the irony of the colonialists’ colonialism lament at this point, rather than digging into what motivates someone’s concern about those “less-developed” people and how the “most-fortunate” will define best interests of the “less-fortunate”.

People tend to get offended when you point out they may be the ones with colonialist bias and tendencies, rather than those they aim to criticize for being engaged in an unsavory form of commerce. So rather than delve into the odd assumptions taken among those who worry, instead I will explore the framework and term of “colonialism” itself.

Everyone today hates, or should hate, the core concepts of colonialism, because the concept has been boiled down to little more than an evil relic of history.

A tempting technique in discourse is to create a negative association. Want people to dislike something? Just broadly call it something they already should dislike, such as colonialism. Yuck. Cyber-colonialism, future yuck.

However, invoking an association with colonialism is not as easy as one might think. Even a simplified definition of colonialism is quite hard to get anyone to agree upon. The subjugation of one group by another through integrated domination might be a good way to start the definition. And just look at all the big words in that sentence.

More than occupation, more than unfair control or any deals gone wrong, colonialism is tricky to pin down because of the element known as “colonus” (the settler-farmer), which measures success as agrarian settlement rather than nomadism.

Perhaps a reverse view helps clarify. Eve Tuck wrote in “Decolonization is Not a Metaphor” that restoration from colonization means being made whole (restoration of ownership and control).

Decolonization brings about the repatriation of Indigenous land and life; it is not a metaphor for other things we want to do to improve our societies and schools.

The exit-barrier to colonialism is not just a simple change to political and economic controls, and it’s not a competitive gain, it’s undoing systemic wrongs to make things right.

After George Zimmerman unjustly murdered Trayvon Martin — illegally stole a man’s life and didn’t pay for it — the #blacklivesmatter movement was making the obvious case for black lives to be valued. Anyone arguing against such a movement that values human life, or trying to distract from it with whataboutism (trying to refocus on lives that are not black), perpetuates an unjust devaluation (illegal theft, immoral end) to black life.

Successful colonies thus can be characterized by an active infiltration by people who settle in with persistent integration to displace and deprive control; anyone they find is targeted in order to “gain” (steal) from their acquired assets. Women are raped, children are abused, men are tortured… all the while being told that if they ask for equality, let alone reparations for loss, they are being greedy and will be murdered (e.g. lynched by the KKK).

It is an act of violent equity misdirection and permanent displacement, coupled with active and forced reprogramming to accept severe and perpetual loss of rights as some kind of new norm (e.g. prison or labor camp). Early for-profit corporate ventures gave little or nothing in return for their thefts whenever they could find a powerful loophole, like colonialism, that unfairly extracted value from human life.

Removing something colonus, therefore, is unlike removing elements performing routine work along commercial lines. Even if you fire the bad workers, or remove toxic leadership, the effects of deep colonialism are very likely to remain. Instead, removal means untangling and reversing the steps that created output under an unjust commercially-driven “civilization”; equity has to flow back to places forced to accept they would never be given any realization or control of their own value.

That is why something like de-occupation is comparatively easy. Even redirecting control, or cancelling a deal or a contract, is easy compared to de-colonization.

De-colonization is very hard.

If I must put it in terms of IT, we are talking about hardware that actively tries to take control of my daily life and integrate into my existing processes while reducing my control over their direction. This is not just a bad chip that gets patched or replaced; it is an entire business-process attack that requires deep rethinking of how gains and losses are calculated.

It would be like someone infecting our storage devices with bitcoin-mining code or artificial intelligence (i.e. a chatbot or personal assistant) that not only drives profits but also is used to permanently settle into our environment and prevent us from having a final say about our own destiny. It’s a form of blackmail, of having your own digital life ransomed back to you.

Reformulating business processes is very messy, and far worse than fixing bugs.

My undergraduate and graduate studies really tried to make sense of the end of colonialism and the role of foreign influence in national liberation movements through the 1960s.

This was not a study of available patching mechanisms or finding a new source of materials. I never found, not even in the extensive work of European philosophers, a simple way to describe the very many facets of danger from always uninvited (or even sometimes invited) selfish guests who were able to invade and then completely run large complex organizations. Once inside, once infiltrated, the system has to reject the thing it somehow became convinced it chose to be its leader.

Perhaps now you can see the trouble with colonialism definitions.

Now take a look at this odd paraphrase of the Oxford Dictionary (presumably because the author is from the UK), used to set up the blog post called “The dawn of Cyber-Colonialism”:

The policy or practice of acquiring full or partial political control over another country’s cyber-space, occupying it with technologies or components serving foreign interests, and exploiting it economically.

Pardon my French but this is complete bullshit. Such a definition at face value is far too broad to be useful. Partial control over another country by occupying it with stuff to serve foreign interest and exploiting it sounds like what most would call imperialism at worst, commerce at best. I mean nothing in that definition says “another country” is harmed. Harm seems essential. Subjugation is harmful. That definition also doesn’t say anything about being opposed to control or occupation, let alone exploitation.

I’m not going to blow apart the definition bit-by-bit as much as I am tempted. It fails across multiple levels and I would love to destroy each.

Instead I will just point out that such a horrible definition would result in Ethiopia having to say it was colonized because of the British 1940 intervention to remove Axis invaders and put Haile Selassie back into power. Simple test. That definition fails.

Let me cut right to the chase. As I mentioned at the start, those arguing that we are entering an era of cyber-colonialism should think carefully whether they really want to wade into the mess of defining colonialism. I advise everyone to steer clear and choose other pejorative and scary language to make a point.

Actually, I encourage them to tell us how and why technology commerce is bad in precise technical details. It seems lazy for people to build false connections and use association games to create negative feeling and resentment instead of being detailed and transparent in their research and writing.

On that note, I also want to comment on some of the technical points found in the blog claiming to see a dawn of colonialism:

What is truly at stake is whether a small number of technologically-advanced countries, including the US and the UK, but also others with a domestic technology industry, should be in a position to absolutely dominate the “cyber-space” of smaller nations.

I agree in general there is a concern with dominance, but this representation is far too simplistic. It assumes the playing field is made up of countries (presumably UK is mentioned because the blog author is from the UK), rather than what really is a mix of many associations, groups and power brokers. Google, for example, was famous in 2011 for boasting it had no need for any government to exist anymore. This widely discussed power hubris directly contradicts any thesis that subjugation or domination come purely from the state apparatus.

Consider a small number of technologically-advanced companies. Google and Amazon are in a position to absolutely dominate the cyber-space of smaller nations. This would seem as legitimate a concern as past imperialist actions. We could see the term “Banana Republic” replaced as countries become a “Search Republic”.

It’s a relationship fairly easy to contemplate because we already see evidence of it. Google’s chairman told the press he was proud of “Search Republic” policies and completely self-interested commerce (the kind Mill warned about in 1861): he said “It’s called capitalism.”

Given the mounting evidence of commercial and political threat to nations from Google, what does cyber-colonialism really look like in the near, or even far-off, future?

Back to the blog claiming to see a dawn of colonialism, here’s a contentious prediction of what cyber-colonialism will look like:

If the manager decides to go with modern internationally sourced computerized system, it is impossible to guarantee that they will operate against the will of the source nation. The manufactured low security standards (or deliberate back doors) pretty much guarantee that the signaling system will be susceptible to hacking, ultimately placing it under the control of technologically advanced nations. In brief, this choice is equivalent to surrendering the control of this critical infrastructure, on which both the economic well-being of the nation and its military capacity relies, to foreign power(s).

The blog author, George Danezis, apparently has no experience with managing risk in critical infrastructure or with auditing critical infrastructure operations so I’ll try to put this in a more tangible and real context:

Recently on a job in Alaska I was riding a state-of-the-art train. It had enough power in one engine to run an entire American city. Perhaps I will post photos here, because the conductor opened the control panels and let me see all of the great improvements in rail technology.

The reason he could let me in and show me everything was that the entire critical infrastructure was shut down. I was told this happened often. The central switching system had a glitch, which occurred more often than you might imagine, and when it did all trains everywhere were stopped. After touring the engine, I stepped off the train and up into a diesel truck driven by a rail mechanic. His beard was as long as a summer day in Anchorage and he assured me trains have to be stopped due to computer failure all the time.

I was driven back to my hotel because no trains would run again until the next day. No trains. In all of Alaska. America. So while we opine about colonial exploitation of trains, let’s talk about real reliability issues today and how chips with backdoors really stack up. Someone sitting at a keyboard can worry about the resilience of modern chips all they want, but that worry needs to be linked to experience with the “modern internationally sourced computerized system” used to run critical infrastructure. I have audited critical infrastructure environments since 1997 and let me tell you they have a unique and particular risk management model that would probably surprise most people on the outside.

Risk is something rarely understood from an outside perspective unless time is taken to explore actual faults in big-picture environments and to study actual events, past and present. In other words, you can’t do a very good job auditing without spending time doing the audit, on the inside.

A manager going with a modern internationally sourced computerized system is (a) subject to a wide spectrum of factors of great significance (e.g. dust, profit, heat, water, parts availability, supply chains), and (b) worried about the presence of backdoors for the opposite reason you might think: they represent hope for support and help during critical failures. I’ll say it again, they WANT backdoors.

It reminds me of a major backdoor into a huge international technology company’s flagship product. The door suggested potential for access to sensitive information. I found it, I reported it. Instead of alarm by this company I was repeatedly assured I had stumbled upon a “service” highly desirable to customers who did not have the resources or want to troubleshoot critical failures. I couldn’t believe it. But as the saying goes: one person’s bug is another person’s feature.

To make this absolutely clear, there is a book called “Back Door Java” by Newberry that I highly recommend people read if they think computer chips might be riddled with backdoors. It details how the culture of Indonesia celebrates the backdoor as an integral element of progress and resilience in daily lives.

Cooking and gossip are done through a network of access to everyone’s kitchen, in the back of a house, connected by alley. Service is done through back, not front, paths of shared interests.

This is not that peculiar when you think about American businesses that hide critical services in alleys and loading docks away from their main entrances. A hotel guest in America might say they don’t want any backdoors until they realize they won’t be getting clean sheets or even soap and toilet-paper. The backdoor is not inherently evil and may actually be essential. The question is whether abuse can be detected or prevented.

Dominance and control are quite complex when you really look at the relationships of groups and individuals engaged in access paths both overt and covert.

So back to the paragraph we started with, I would say a manager is not surrendering control in the way some might think when access is granted, even if access is greater than what was initially negotiated or openly/outwardly recognized.

With that all in mind, re-consider the subsequent colonization arguments given by “The dawn of Cyber-Colonialism”:

Not opting for computerized technologies is also a difficult choice to make, akin to not having a mobile phone in the 21st century. First, it is increasingly difficult to source older hardware, and the low demand increases its cost. Without computers and modern network communications is it also impossible to benefit from their productivity benefits. This in turn reduces the competitiveness of the small nation infrastructure in an international market; freight and passengers are likely to choose other means of transport, and shareholders will disinvest. The financial times will write about “low productivity of labor” and a few years down the line a new manager will be appointed to select option 1, against a backdrop of an IMF rescue package.

That paragraph has an obvious false choice fallacy. The opposite of granting access (prior paragraph) would be not granting access. Instead we’re being asked in this paragraph to believe the only other choice is lack of technology.

Does anyone believe it is increasingly difficult to source older hardware? We are given no reason. I’ll give you two reasons why old hardware could be increasingly easy to source: reduced friction and increased privacy.

About 20% of people keep their old device because it’s easier than selling it. Another 20% keep their device because of privacy concerns. That’s 40% of old hardware sitting and ready to be used, if only we could erase the data securely and make it easy to exchange for money. SellCell.com (trying to solve one of these problems) claims the stock of older cellphone hardware in America alone is now worth about $47 billion.
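As a rough sanity check on those numbers (the two 20% figures and the $47 billion total come from the text above; everything else, including the 500 million device count, is my own illustrative assumption), the arithmetic can be sketched like this:

```python
# Back-of-the-envelope estimate of the idle used-phone supply.
# The 20% + 20% split and the $47 billion total follow the figures
# cited in the post; the device count is a hypothetical assumption.

total_old_devices = 500_000_000        # assumed: old phones sitting in US drawers
kept_for_convenience = 0.20            # easier to keep than to sell
kept_for_privacy = 0.20                # worried about data left on the device

idle_fraction = kept_for_convenience + kept_for_privacy   # 40% of the stock
idle_devices = int(total_old_devices * idle_fraction)

sellcell_estimate = 47_000_000_000     # SellCell's claimed total value in USD
implied_price = sellcell_estimate / idle_devices  # value per idle device

print(f"Idle devices: {idle_devices:,}")
print(f"Implied value per device: ${implied_price:.0f}")
```

Under these assumptions the market would price each idle device around $235, which is at least plausible for recent-model phones; the point is only that the supply exists, not that the numbers are precise.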

And who believes that low demand increases cost? What kind of economic theory is this?

Scarcity increases cost, but we do not have evidence of scarcity. We have the opposite. For example, there is no demand for the box of Blackberry phones sitting on my desk.

Are you willing to pay me more for a Blackberry because of low demand?
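To make the economics explicit, here is a toy linear supply-and-demand model (all coefficients made up for illustration): shifting the demand curve down lowers the equilibrium price, the opposite of the blog’s “low demand increases cost” claim.

```python
# Toy linear market: demand P = a - b*Q, supply P = c + d*Q.
# Equilibrium where the curves cross: Q* = (a - c) / (b + d), P* = a - b*Q*.
# All coefficients are illustrative; only the direction of change matters.

def equilibrium(a, b, c, d):
    q = (a - c) / (b + d)
    return q, a - b * q

# A phone model with healthy demand.
q_before, p_before = equilibrium(a=300, b=2, c=20, d=2)

# Demand collapses: the demand intercept drops, supply unchanged.
q_after, p_after = equilibrium(a=100, b=2, c=20, d=2)

print(f"price before: ${p_before:.0f}, after demand drops: ${p_after:.0f}")
```

With supply held fixed, the equilibrium price falls from $160 to $60 when demand shrinks; only genuine scarcity (a supply shift) would push the price up, which is exactly the author’s point about the box of Blackberries.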

Even more suspect is the statement that without computers and modern network communications it is impossible for a country to benefit. Having been given a false-choice fallacy (either the latest technology or nothing at all), are we to believe everyone in the world who doesn’t buy the newest technology is doomed to fail and devalue their economy?

Apply this to ANY environment and it should be abundantly clear why this is not the way the world works. New technology is embraced slowly, cautiously (relative terms) versus known good technology that has proven itself useful. Technology is bought over time with varying degrees of being “advanced”.

To further complicate the choice, some supply chains have a really long tail due to the nature of a device achieving a timeless status and generating localized innovation with endless supplies (e.g. the infamous AK-47, classic cars).

To make this point clearer, just tour the effects of telecommunications providers in countries like South Africa, Brazil, India, Mexico, Kenya and Pakistan. I’ve written about this before on many levels and visited some of them.

I would not say it is the latest or greatest tech, but tech available, which builds economies by enabling disenfranchised groups to create commerce and increase wealth. When a customer tells me they can only get 28.8K modem speeds I do not laugh at them or pity them. I look for solutions that integrate with slow links for incremental gains in resilience, transparency and privacy. When I’m told 250ms latency is a norm it’s the same thing, I’m building solutions to integrate and provide incremental gains. It’s never all-or-nothing.

A micro-loan robot in India that goes into rough neighborhoods to dispense cash, for example, is a new concept based on relatively simple supplies that has a dramatic impact. Groups in a Kenyan village share a single cell-phone and manage it similarly to the old British phone booth. There are so many more examples, none of which break down in simple terms of the amazing US government versus technologically-poor countries left vulnerable.

And back to the blog paragraph we started with, my guess is the Financial Times will write about “productivity of labor” if we focus on real risk, and a few years down the line new managers will be emerging in more places than ever.

Now let’s look at the conclusion given by “The dawn of Cyber-Colonialism”:

Maintaining the ability of western signals intelligence agencies to perform foreign pervasive surveillance, requires total control over other nations’ technology, not just the content of their communication. This is the context of the rise of design backdoors, hardware trojans, and tailored access operations.

I don’t know why we should believe anything in this paragraph. Total control of technology is not necessary to maintain the ability of intelligence. That defies common sense. Total control is not necessary to have intelligence be highly effective, nor does it mean intelligence will be better than having partial or incomplete control (as explained best by David Hume).

My guess is that paragraph was written with those terms because they have a particular ring to them, meant to evoke a reaction rather than explain a reality or demonstrate proof.

Total control sounds bad. Foreign pervasive surveillance sounds bad. Design backdoors, Trojan horses and tailored access (opposite of total control) sound bad. It all sounds so scary and bad, we should worry about them.

But back to the point: even if we worry because such scary words are being thrown at us about how technology may be tangled into a web of international commerce and political purpose, nothing in that blog on “cyber-colonialism” comes even close to qualifying as colonialism.

US Wants to Help Africa on the Rise

I am happy to see Secretary of State, John Kerry, saying in the Washington Post that America needs to help Africa with difficult decisions that lie ahead:

The best untold story of the last decade may be the story of Africa. Real income has increased more than 30 percent, reversing two decades of decline. Seven of the world’s 10 fastest-growing economies are in Africa, and GDP is expected to rise 6 percent per year in the next decade. HIV infections are down nearly 40 percent in sub-Saharan Africa and malaria deaths among children have declined 50 percent. Child mortality rates are falling, and life expectancy is increasing.

Reading between the lines, Kerry seems to be watching America lose influence at a time when it should be pulled in by the Africans. He is advising Americans to start thinking of Africa in broader terms of partnership rather than just a place to impose Pentagon-led “protective” objectives (e.g. stability for corporate margins, access to infrastructure projects for intel to chase and find our enemies, humanitarian assistance to verify our intel access to infrastructure is working).

A shift from Pentagon objectives to State Department ones (unless I’m being naive, there still exists a significant difference between the two) sounds like a good idea. Kerry does not back away from highlighting past American efforts as he moves towards imposing an American view of how to measure success:

The U.S. government has invested billions of dollars in health care, leading to real progress in combating AIDS and malaria. Our security forces work with their African counterparts to fight extremism. U.S. companies are investing in Africa through trade preferences under the African Growth and Opportunity Act. As a friend, the United States has a role to play in helping Africans build a better future.

Many of the choices are crystal clear. African leaders need to set aside sectarian and religious differences in favor of inclusiveness, acknowledge and advocate for the rights of women and minorities, and they must accept that sexual orientation is a private matter. They must also build on their economic progress by eliminating graft and opening markets to free trade.

I am not sure these two things are compatible if Africa is looking to find the best partner for decisions ahead. To put it another way, has America proven itself a help or a hindrance for the past decade with humanitarian issues? How does it advocate for rights of women and minorities yet send drones with a high civilian casualty rate? The fundamental question of how to reconcile offers of assistance with foreign strings and caveats seems underplayed.

My experience in Africa is that the Chinese and Saudis push much more aggressive assistance programs with tangible results, everywhere from power plants and water supplies to schools and hospitals, without overt pressure on values alignment. Whereas a Saudi hospital might require women to cover their skin, which seems to America a horrible insult to women, Africans treat this as a minor and perhaps even amusing imposition to disobey. Meanwhile an American hospital where anonymity is impossible, and where patients are said to be removed without warning and “disappeared”, creates an environment of resistance.

Allow me to relate a simple example of how the US might be able to both provide assistance while also find values alignment:

Global efforts to fight malaria in Africa are less likely to fail because of the complicated nature of the disease than because of fraud. Kerry calls it graft. In Africa there is an unbelievable amount of graft tied directly to humanitarian efforts, and I doubt there is anyone in the world who would say fraud is necessary or good.

I have run the stats given to me by leaders of humanitarian projects and I even have toured some developments on the ground. Conclusions to me seem rather obvious. Since 1989 my studies of humanitarian/ethical intervention in Africa, particularly the Horn, have looked into reasons for failure and one universal truth stands out. Graft shows up as a core issue blocking global efforts to help Africa yet I’m not sure anyone who isn’t looking for it already really notices. Here’s a typical story that tends to have no legs:

The Global Fund to Fight AIDS, Tuberculosis and Malaria has suspended funding for two malaria grants in Mali and terminated a third for tuberculosis (TB) after finding evidence that $4 million has been embezzled, the organisation said on Tuesday.

Grants to Mali and four other countries — Ivory Coast, Djibouti, Mauritania and Papua New Guinea — have been put under closer scrutiny with tighter restrictions on cash movements.

[…]

The suspensions in Mali concern a $14.8 million malaria grant to buy and distribute insecticide-treated nets for pregnant women and young children; a $3.3 million grant for anti-malaria drugs; and a $4.5 million TB grant targeted at treating prisoners, people in mining communities and patients with multidrug resistant strains of TB among others.

Please verify and see for yourself. Seek answers for why assistance declines in areas most in need, or is rejected despite increasing demand. Disease cannot be eradicated if we back away when economic friction heats up. You may find, as I did, that projects stall when we cannot detect supply chain threats, report vulnerabilities and enforce controls.

On the flip side of this issue, imposing a solution from the top down or from outside only exacerbates the problem. Nobody wants an outsider to come in and accuse insiders of fraud if any internal methods exist. Outside pressure can shut down the relationship entirely and block all access, as well as undermine internal footholds, which is why you rarely find diplomats and humanitarian project leaders touching on this issue.

I have proposed technical solutions to solve some of these supply chain issues blocking Africa’s “rise” although I doubt anyone is in a rush to implement them because politically the problem has been allowed to trundle along undefined. I am glad Kerry mentioned it as a footnote on America’s plan as it needs to be picked up as more of a headline. It would be great to see “America helps Senegal reduce fraud in fight to eradicate disease” for example. Until someone like Bill Gates says the problem we must overcome is weak systems that allow graft, we could just keep pumping assistance and yet see no gain or even see reversals. In fact Africa could distance itself from America if our aid goes misdirected while we attempt to impose our broader set of values on those who are not receiving any benefits.

American leaders now may want to help Africa rise, and they will have to find ways to operate in a market that feels more like a level playing field. We need to step in more as peers in a football match, rather than flood the field with referees, showing how we have solved similar problems while empowering local groups to solve them in ways that may be unfamiliar to us. Once we’re following a similar set of rules, with clear opponents like fraud and malaria, we need to find ways to pass the ball and score as a team. Could Kerry next be talking about delivering solutions integrated with African values rather than pushing distinctly American ones as preconditions?

IT is a perfect fit here because it can support peer-based coalitions of authorities to operate on infrastructure without outside controls. Imagine a network of nodes emerging in Africa the same way the Internet first evolved in America, but with modern technology that is energy efficient, mobile and wireless.

A system to detect, report and block graft on a locally derived scale, instead of a centralized top-down monitoring system, seems unlikely to happen. Yet that could be exactly what makes America a real partner to Africa’s rise. It raises the question of whether anyone has positioned themselves to deliver NFC infrastructure for nets and vaccines while also agreeing to step back, giving shared authority and responsibility to track progress through loose federation. America could be quite a help, yet also faces quite a challenge. Will Kerry find a way for Africans to follow a path forged by America in ways he may not be able to control?

American Tipping is Rooted in Slavery

5 Feb 2021 Update (nearly 7 years already): the New York Times has published an editorial by Michelle Alexander titled “Tipping Is a Legacy of Slavery: Abolish the racist, sexist subminimum wage now.”


There was no tipping in America before the Civil War.

It was during the 1920s — the second rise of the KKK after the Civil War — that all anti-tipping laws were repealed in America, to make way for white supremacists to promote tipping as somehow better for someone. Why was the KKK campaigning to legalize tipping?

A labor review of December 1937 asks a very poignant question about why upwardly mobile blacks in Washington state (1909), Washington, DC (1910), Mississippi (1912), Arkansas (1913), Iowa, South Carolina and Tennessee (1915), as well as several other states, would pass anti-tipping laws only to find them repealed a few years later:

In view of the fact that the anti-tipping sentiment was still strong in the early and middle twenties, one may wonder why the anti-tipping laws were repealed.

When they say anti-tipping sentiment was strong, consider that unionized waiters of New York went on strike in 1906 to refuse tips and demand a minimum wage increase from $2.00/day to $2.50.

Source: 1909 Street’s Pandex of the News, p. 316

It would seem the unions were not ordering strikes for higher tips, only for higher wages.

I should probably make a special note here that when Washington DC banned tips in 1910, basically everyone affected was African American. It clearly was about race and livable wages.

What changed in the early 1920s? The KKK returned.

Indeed, this is like asking why the South went to war to preserve slavery in view of the fact that anti-slavery sentiment was strong.

Or perhaps it would be like asking why the KKK pushed so hard for prohibition (encoded criminalization of non-whites) in view of the fact that anti-prohibition (emancipated slaves distilling bourbon) sentiment was still strong.

In other words, anti-tipping laws started in 1909, before the KKK was restarted, and all were repealed after. The KKK quickly grew to millions of members during Woodrow Wilson’s administration (1913-1921), and restoring tipping culture was at the top of their agenda.

One could almost make the argument that anti-tipping laws were a symptom of the changes in America that really inflamed “white insecurity” fears. The racist political campaigns of the late 1890s took on new urgency to remove anything that could give black Americans fair wages (Update in 2020: “Wilmington’s Lie: The Murderous Coup of 1898 and the Rise of White Supremacy“).

You could say tipping is how racist Americans who lost their Civil War managed to instead legalize their persistent “only white men rule” aims for aristocratic-like power over the now-emancipated black servant class in America.

The abrupt end of slavery in America meant white supremacists shifted tactics to find ways to deny livable wages to non-whites. They also sought to under-fund or block taxation to reduce social services (education, healthcare) for those they had just denied a livable wage. NPR brings forward several examples of sentiment from the early 1900s, after the Civil War ended:

…journalist John Speed writing in 1902 said “Negroes take tips, of course, one expects that of them – it is a token of their inferiority. But to give money to a white man was embarrassing to me.” Such was the furor surrounding tipping that, in 1907, Sen. Benjamin Tillman of South Carolina – a virulent segregationist whose bronze statue stands outside the statehouse in Columbia – actually made national headlines for tipping a black porter at an Omaha hotel. The porter, well aware of Tillman’s previous boast that he never “tips a nigger,” told reporters sardonically that he would have the quarter made into a watch charm. “Tillman gives Negro a Tip,” was The New York Times’ headline, under which ran a sympathetic editorial on how travelers were forced “to convert themselves into fountains playing quarters upon the circumambient Africans.”

Thus it was two basic economic tactics meant to undermine black prosperity in America that became the foundation of America’s racist and hateful tipping culture. And the KKK was so successful that the practice survives to this day.

Brazil maintained slavery even longer than America. It even convinced American Civil War losers to immigrate so they could continue slavery overtly instead of tipping.

That’s the short version of history. Now for the long form…


Some in America saw the dangers of tipping early and worked to shut down such anti-democratic culture, arguing correctly that it violated core American values. There were numerous anti-tipping laws in the early 1900s. Here you can see some of the views of tipping in America at that time:

“Tipping: An American Social History of Gratuities”, by Kerry Segrave, p. 6

The KKK was reborn in 1915, however, and had massive impact on American culture, pushing things like the racist prohibition laws (1920 to 1933). No surprise, then, that by 1926 the white supremacist lobby had repealed the anti-tipping laws.

One of the most powerful arguments, seen here in a 1916 rebuke of Americans worshiping aristocracy, is how tipping culture is so obviously incompatible with democracy.

“The Itching Palm: A Study of the Habit of Tipping in America”, by William R. Scott, 1916

This clear analysis of American “tipping” is confirmed by a Paris Food History blog recollection of where and why tipping culture was invented…French aristocracy and corruption:

These are all examples of an aristocrat giving a gratification to various workers […] It is unlikely on the other hand that diners in taverns and the cheaper cabarets gave tips. This was an obligation for aristocrats – or, probably, those who wanted to emulate them. And it even extended to those in the prison which most often held them: the Bastille. […] Everywhere you eat, submit without a murmur to the tax of the tip. It is illogical, absurd, exorbitant, vexatious […] In 1856, August Luchet described the tronc in which all tips were gathered: “a cylindrical metal safe, split on top.” He also wrote that (not unexpectedly) distribution of the collected tips was not always to the advantage of the servers; some owners found various pretexts for redirecting or appropriating some portion of the funds.

New York, arguably modeled on Paris in its heyday, has now become the epitome of the “aristocrats in a prison” culture described above.

This was nicely documented just last year by Lynne Truss as she visited from Europe:

…tips are not niceties: give a “thank you” that isn’t green and foldable and you are actively starving someone’s children. This is not only demeaning for everyone; it also makes you feel, basically, that you are being constantly taken…

She’s not wrong. The economics of tipping look to be a disaster for everyone involved. The Economic Policy Institute explains that the poverty rate of tipped workers is nearly double that of other workers, and that tipped workers are about three times as likely to be on food stamps:

Tipped workers — whose wages typically fall in the bottom quartile of all U.S. wage earners, even after accounting for tips — are a growing portion of the U.S. workforce. Employment in the full-service restaurant industry has grown over 85 percent since 1990, while overall private-sector employment grew by only 24 percent. In fact, today more than one in 10 U.S. workers is employed in the leisure and hospitality sector, making labor policies for these industries all the more central to defining typical American work life.

In other words, by tipping, you are pushing less money into a tax pool while at the same time driving the person tipped to need more tax-funded assistance. It’s quite literally a means for aristocrats to appear good in a fleeting personal moment, while actually keeping poor workers in a place they can’t escape.

This shouldn’t be news to anyone. For ten years already we’ve had quantitative data to support what all these arguments above are saying.

The data at this point shows that tipping has become the norm in America as a legalized method to keep non-whites poor in an anti-democratic push for white nationalist aristocracy, and it doesn’t really serve any other purpose.

…98 percent of [21 million Americans eating out] leave a voluntary sum of money (or tip) for the servers who waited on them. These tips, which amount to over $20 billion a year, are an important source of income for the nations’ two million waiters and waitresses. In fact, tips often represent 100 percent of servers’ takehome pay because tax withholding eats-up all of their hourly wages. The income implications of tipping make it a major concern…weak service-tipping relationship, I argued, raises serious questions about the use of tips as a measure of server performance or customer satisfaction as well as the use of tips as incentives to deliver good service….weak relationship between tips and service quality at this level of analysis undermines the use of tips as a measure of customer satisfaction and an incentive to deliver good service.

This also has been described by psychologists in similar terms:

Empirical evidence suggests that tips are hardly affected by service quality…people derive benefits from tipping including impressing others and improving their selfimage as being generous and kind…. Whether the social norm of tipping increases social welfare depends crucially on the question whether tipping increases service quality. Although service quality is generally high, which could lead us to think that tipping is the incentive that causes waiters to provide excellent service, the analysis above shows that the sensitivity of tips to service quality is so small that tipping is not likely to be the reason for the high service quality. Consequently, tipping, at least in restaurants, does not seem to improve social welfare and economic efficiency by improving service quality

Again, just to be clear we’ve seen for over a decade studies have repeatedly proven “tipping was not significantly related to servers’ or third-parties’ evaluations of the service.”

Tipping really should be treated as an insult and a sign of disrespect, when you think about what is going on.

I remember one night on a work trip in Copenhagen I asked a waitress if I could leave a tip and she angrily replied:

I am paid a reasonable salary for my work and I am good at it. I go to school for free and get healthcare for free. This isn’t your corrupt greed-driven American system. Tips are rude.

She was right. I was caught out for being so… American, unintentionally imposing slavery culture.

There’s no actual link between tipping and better service when you look at the data, so my tip made no sense in her country, which has no aristocratic aspirations or need to perpetuate economic disparity. She wasn’t desperate for an aristocrat to keep her afloat. Since she and I were in a balanced professional engagement, the amount had already been settled, and handing her a few bills was an insulting gesture.

Taking pride in your work and doing a good job don’t actually come from tips, as restaurateurs explain themselves.

In any workplace, everyone is required to perform well, and tips have nothing to do with it.

We see scientists report that tipping in America is racist by design, perpetuating the aristocratic aspirations of pre-Civil War Americans, and does exactly what the KKK had hoped it would.

The data show very clearly that African Americans receive less in tips than whites, and so there is a legal argument to be made that as a protected class, African American servers are getting less for doing the same work. And therefore, the institution of tipping is inherently unfair.

It is inherently unfair because that is what it was always intended to be, if you accept the argument that it was a loophole for perpetuation of white nationalist policy.

What tips have to do with is a Civil War started by white aristocrats to perpetuate and expand human slavery. Despite losing that war, white supremacists have found numerous ways to maintain programs designed to disadvantage non-whites. That is how American tipping is rooted in slavery and should be abolished.

What Surveillance Taught Me About the NSA and Tear Gas: It’s Time to Rethink our Twitters about Nightmares

Medium read: 23.45 minutes at 1024×768

Zeynep Tufekci has tweeted a link to a journal of her thoughts on surveillance and big data.

#longreads version of my core thesis: “Is the Internet Good or Bad? Yes.” I reflect on Gezi, NSA & more.

The full title of the post is “What tear gas taught me about Twitter and the NSA: It’s time to rethink our nightmares about surveillance.”

I noticed right away she used a humblebrag to describe events at a recent conference she attended:

A number of high-level staff from the data teams of the Obama and Romney campaigns were there, which meant that a lot of people who probably did not like me very much were in the room.

You hate it when high-level people do not like you…? #highlevelproblems?

She then speculates on why she probably would not be liked by such high-level people. Apparently she has publicly caricatured and dismissed their work as “richer data for the campaigns could mean poorer democracy for the rest of us”. She expects them to not like her personally for this.

I said she speculates that she is not “liked” because she does not quote anyone saying they “did not like” her. Instead she says they have publicly dismissed her dismissal of their work.

My guess is she wants us to see the others as angry or upset with her personally to set the stage for us seeing her in the hot-seat as a resistance thinker; outnumbered and disliked for being right/good, she is standing up for us against teams of bipartisan evil data scientists.

Here is how she describes meeting with the chief scientist on Obama’s data analytics team, confronting him with a hard-hitting ethical dilemma and wanting to tell him to get off the fence and take a stand:

I asked him if what he does now — marketing politicians the way grocery stores market products on their shelves — ever worried him. It’s not about Obama or Romney, I said. This technology won’t always be used by your team. In the long run, the advantage will go to the highest bidder, the richer campaign.

He shrugged, and retreated to the most common cliche used to deflect the impact of technology: “It’s just a tool,” he said. “You can use it for good; you can use it for bad.”

“It’s just a tool.” I had heard this many times before. It contains a modicum of truth, but buries technology’s impacts on our lives, which are never neutral. Often, I asked the person who said it if they thought nuclear weapons were “just a tool.”

The data scientist appears to say a decision on whether the tool is good or bad in the future is not up to him. It’s a reasonable answer. Zeynep calls this burying the truth, because technology is never neutral.

To be honest there is a part of me tempted to agree with her here. That would be a nice, quiet end to my blog post.

But I must go on…

Unfortunately I cannot stop here, because she does not end her post there either. Instead, she goes on to apparently contradict her own argument about tools being non-neutral… and that just happens to be the sort of thing that drives me to write a response.

The reason I would agree with her is that I often make this argument myself. It is great to see it made by her. Just the other day I saw someone tweet that technology can’t be evil, and I had to tweet back that some technology can be labeled evil. In other words, a particular technology can be defined by social convention as evil.

This is different from the argument that technology can never be neutral, but it is similar. I believe much of it is neutral in a natural state and acquires a good/bad status depending on use, but there still are cases where it is inherently evil.

The philosophical underpinning of my argument is that society can choose to label some technology as evil when it judges that no possible good can outweigh the harm. A hammer and a kitchen knife are neutral. In terms of evil, modern society is reaching the highest levels of consensus when discussing cluster bombs, chemical weapons, land mines and even Zeynep’s example of nuclear weapons.

My keynote presentation at the 2011 RSA Conference in London used the crossbow as an example of the problem of consensus building on evil technology. 500 years ago the introduction of a simple weapon that anyone could easily learn meant a sea change in economic and political stability: even the most skilled swordsman no longer stood a chance against an unskilled peasant who picked up a crossbow.

You might think this meant revolution was suddenly in the hands of peasants to overthrow their king and his mighty army of swordsmen. Actually, imagine the opposite. In my presentation I described swordsmen who attempted to stage a coup against their own king. A quickly assembled army of mercenary-peasants was imported and paid to mow down the revolutionary swordsmen with crossbows. The swordsmen would then petition a religious leader to outlaw crossbows as non-neutral technology, inherently evil, to restore their ability to protect themselves from the king.

The point is we can have standards, conventions or regulations that define technology as inherently evil when enough people agree more harm than good will always result from its use.

Is the Internet just a tool?

With that in mind, here comes the contradiction and why I have to disagree with her. Remember, above Zeynep asked a data scientist to look into the future and predict whether technology is bad or good.

She did not accept leaving this decision to someone else. She did not accept his “most common cliche used to deflect the impact of technology”. And yet she says this:

I was asked the same question over and over again: Is the internet good or bad?

It’s both, I kept saying. At the same time. In complex, new configurations.

I am tempted to use her own words in response. This “contains a modicum of truth, but buries technology’s impacts on our lives, which are never neutral.” I mean does Zeynep also think nuclear weapons are “both good and bad at the same time, in complex, new configurations”?

Deterrence was certainly an argument used in the past with exactly this sort of reasoning to justify nuclear weapons; they are bad but they are good so they really are neutral until you put them in the hands of someone.

And on and on and on…

The part of her writing I enjoy most is how she personalizes the experience of resistance and surveillance. It makes for very emotionally charged and dramatic reading. She emphasizes how we are in danger of a Disney-esque perfect surveillance world. She tells us about people who, unable to find masks when they disagree with their government, end up puking from tear gas. Perhaps the irony between these two points is lost on her. Perhaps I am not supposed to see them as incongruous. Either way, her post is enlightening as a string of first-person observations.

The part of her writing I struggle with most is a lack of political theory, let alone political science. She does not touch on the essence of discord. Political science studies of violent protests around the world in the 1960s, for example, were keying in on the nature of change. Technology was a factor then also, and in the time before, and the time before that, which raises a fundamental question: are there lessons already learned? Maybe this is not the first time we’ve crossed this bridge.

Movements towards individualism, opportunity, creativity, and a true thinking and nourishing society appear to bring forth new technology, perhaps even more than new technology causes them. Just like the crossbow was developed to quickly reduce the ability of a swordsman to protect his interests, innovations in surveillance technology might have been developed to reduce the ability of a citizen to protect theirs. Unlike the crossbow, however, surveillance does not appear to be so clearly and consistently evil. Don’t get me wrong, more people than ever are working to classify uses of surveillance tools as evil. And some of it is very evil but not all of it.

Harder questions

Political science suggests there always is coercion in government. Most people do not mind some amount of coercion when it is exchanged for something they value. Then as this value shrinks, and progress towards a replacement value is not rapid enough, it generates friction and a return towards independence. So loss of independence theoretically can be a balance with some form of good.

It is obvious that surveillance technology (e.g. Twitter) has in many cases found positive uses, such as monitoring health, natural disasters or accidents. It even can be argued that political parties have found beneficial uses for surveillance, such as fraud monitoring. The hard question is how to know when any act of surveillance, more than the latest technology, becomes evil by majority definition, and what oversight is required to ensure we do not cross that point. She seems to suggest the individual is never safe:

[Companies and political parties] want us to click, willingly, on a choice that has been engineered for us. Diplomats call this soft power. It may be soft but it’s not weak. It doesn’t generate resistance, as totalitarianism does, so it’s actually stronger.

This is an oversimplified view of both company and political party relationships with individuals. Such an oversimplification makes it easy to “intertwine” concepts of rebellion and surveillance, and to reference diplomats as some sort of Machiavellian concept. The balance between state and individual is not inherently or always a form of deception to lull individuals into compliance without awareness of risks. There can actually be a neutral position, just as with technology.

What should companies and political parties offer us if not something they think we want? Should individuals be given choices that have not been engineered in any way? The act of providing a choice is often itself a form of engineering, as documented in elections with high rates of illiteracy (where candidates are “randomly” assigned an icon to represent them on ballots).

Should individuals be given a choice completely ignorant of our desires? That raises the very question of market function and competition. It brings to mind Soviet-era systems that pretended to “ignore” desire in order to provide “neutral” choices, replacing it with centrally planned outcomes. We should think carefully about the value offered to the individual by a government or a company, and at what point value becomes “seduction” to maintain power through coercion.

Ultimately, despite having earlier criticized others for “retreating” to a neutral ground, her conclusion ends up in the same place:

Internet technology lets us peel away layers of divisions and distractions and interact with one another, human to human. At the same time, the powerful are looking at those very interactions, and using them to figure out how to make us more compliant.

In other words, Internet technology is neutral.

When we connect with others we may become more visible; the connection still has value when visibility is a risk. When we connect we may lose independence; the connection still has value when loss of independence is a risk.

It is disingenuous for us to label anyone who watches us as “the powerful” or to call ways that “make us more compliant” inherently evil. Compliance can be a very good thing, obviously.

Zeynep offers us interesting documentation of first-person observations but offers little in the way of analysis and historical context. She also gives unfair treatment to basic political science issues and criticizes others before she seems to arrive at the same conclusion.

As others have said, it’s a “brilliant, profoundly disturbing piece”.