Recently I wrote here about the ill-fated American operation “Igloo White” from the Vietnam War, which cost billions of dollars in an attempt to locate enemies by gathering information from many small sensors.
It’s in fact an old pursuit as you can see from this news image of the Japanese Emperor inspecting his big 1936 investment in anti-aircraft data collection technology.
Even earlier, Popular Science this month in 1918 published a story called “How Far Off Is That German Gun? How sixty-three German guns were located by sound waves alone in a single day.”
How Far Off Is That German Gun? How 63 German guns were located by sound waves alone in a single day, Popular Science monthly, December 1918, page 39
Somewhere in between the Vietnam War and WWI narratives, we should expect the Defense Department soon to start exhibiting how it uses the latest location technology (artificial intelligence) to hit enemy targets.
The velocity of information between a sensor picking up signs of enemy movement and the counter-attack machinery…is the stuff of constant research, probably as old as war itself.
Popular Mechanics, for its share, also ran a cover story featuring acoustic locator devices, such as a pre-radar contraption highlighted as the future way to find airplanes.
The cover style looks to be from the 1940s although I have only found the image so far, not the exact text.
That odd-looking floral arrangement meant for war was known as a Perrin acoustic locator (named for French Nobel prizewinner Jean-Baptiste Perrin) and it used four large clusters of 36 small hexagonal horns (six groups of six).
Such a complicated setup might have seemed like an improvement to some. For comparison, here are German soldiers in 1917 using a single personal field acoustic and sight locator to pinpoint the “flash bang” of enemy artillery.
Source: “Weird War One” by Peter Taylor, published by Imperial War Museum
Obviously use of many small sensors gave way to the common big dish design we see everywhere today. Igloo White perhaps could be seen as a Perrin data locator of its day?
It is a perfect example of how simply feeding ever more small sensors into a single processing unit is not necessarily the right approach versus designing one very large sensor fit for purpose.
…”need for speed” in the context of the well known Processing, Exploitation and Dissemination (PED) process which gathers information, distills and organizes it before sending carefully determined data to decision makers. The entire process, long underway for processing things like drone video feeds for years, has now been condensed into a matter of seconds, in part due to AI platforms like FIRESTORM. Advanced algorithms can, for instance, autonomously sort through and observe hours of live video feeds, identify moments of potential significance to human controllers and properly send or transmit the often time-sensitive information.
“In the early days we were doing PED away from the front lines, now it’s happening at the tactical edge. Now we need writers to change the algorithms,” Flynn explained.
“Three years ago it was books and think tanks talking about AI. We did it today,” said Army Secretary Ryan McCarthy.
Three years ago? Not sure why he uses that time frame. FIRESTORM promises to be an interesting new twist on Igloo White from around 50 years ago, and we would be wise to heed the severe “fire, ready, aim” mistakes made back then.
The US Air Force (USAF) at the end of 1967 started to air-drop around 20,000 micro sensors into a country bordering Vietnam to be monitored by an IBM mainframe, in order to help direct US airstrikes. The project was an expensive disaster that became a foundation for US domestic military surveillance of non-whites.
It had little impact (e.g. “sensors couldn’t tell the difference between a gun and a shovel”) while costing American lives. All it really proved was that drones flying above a mesh of sensors could launch airstrikes on a moment’s notice…for the low, low price of just $1 billion/year in the 1970s, as the following documentary puts plainly:
When you stop to think about it if you have $30M orbiting reconnaissance aircraft to transmit signals, and $20M command post to call in four $10M fighters to assault a convoy of five $5000 trucks with $2000 worth of rice, it’s easy to see that’s not cost-effective. This is a self-inflicted wound… a losing proposition…
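The documentary’s arithmetic is worth checking for yourself. Using only the figures quoted above, a few lines of Python make the asymmetry explicit:

```python
# Cost asymmetry in the Igloo White strike chain, using the
# documentary's own figures (all values in US dollars).
recon_aircraft = 30_000_000      # orbiting reconnaissance aircraft
command_post = 20_000_000        # ground command post
fighters = 4 * 10_000_000        # four fighters called in per strike
trucks = 5 * 5_000               # convoy of five trucks
rice = 2_000                     # the cargo being destroyed

capital_committed = recon_aircraft + command_post + fighters
target_value = trucks + rice

ratio = capital_committed / target_value
print(f"Capital committed: ${capital_committed:,}")
print(f"Target value:      ${target_value:,}")
print(f"Asymmetry:         {ratio:,.0f}:1")
```

Roughly 3,300 dollars of committed hardware per dollar of target value, before even counting the $1 billion per year operating cost. “Not cost-effective” is putting it mildly.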
Initial Plans
The North Vietnamese had built a network of roads through neighboring neutral countries Laos and Cambodia to supply forces in South Vietnam. This “Truong Son Road” (called “Ho Chi Minh Trail” by Americans) was concealed by the natural foliage of thick jungle.
Plans were concocted by Americans to appear respectful of Laos and Cambodia, while still bombing them, by secretly dropping hidden sensors that would guide targeted strikes and Army Special Forces teams “over the fence.”
The idea of constructing an anti-infiltration barrier across the DMZ and the Laotian panhandle was first proposed in January 1966 by Roger Fisher of Harvard Law School in one of his periodic memos to McNaughton.
A book called The Closed World explains in detail what these Harvard Law School plans turned into:
There were several iterations of the sensors. The USAF archives refer to these categories:
ADSID I and III (Normal and Short): Air Delivered Seismic Intrusion Detector – transmitted vibration from a geophone (personnel or vehicles in motion)
ACOUSID II and III: Acoustic and Seismic Intrusion Detector – transmitted sound from a microphone
We’re talking here about $2K radios inside a dart-shaped canister with a two-week battery (later extended to 45 days by changing from continuous transmission to polling), and a 20% failure rate on deployment.
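That jump from two weeks to 45 days is consistent with simple duty-cycle arithmetic: a radio transmitting continuously drains its battery at full draw, while one that wakes only when polled averages a fraction of that current. A minimal sketch, with current draws that are purely illustrative assumptions (chosen to match the reported figures, not taken from USAF archives):

```python
# Battery life under continuous transmission vs. polled duty cycling.
# All current/capacity figures are illustrative assumptions only.
BATTERY_MAH = 10_000  # hypothetical battery capacity

def battery_life_days(avg_current_ma: float) -> float:
    """Hours of capacity divided by average draw, converted to days."""
    return (BATTERY_MAH / avg_current_ma) / 24

# Continuous mode: transmitter always on.
tx_ma = 30.0
continuous = battery_life_days(tx_ma)  # ~13.9 days, i.e. about two weeks

# Polled mode: transmit 5% of the time, idle at low draw otherwise.
duty = 0.05
idle_ma = 8.0
avg_ma = duty * tx_ma + (1 - duty) * idle_ma  # 1.5 + 7.6 = 9.1 mA average
polled = battery_life_days(avg_ma)  # ~45.8 days

print(f"continuous: {continuous:.0f} days, polled: {polled:.0f} days")
```

The point is not these exact numbers but the shape of the trade: cutting average current by roughly two thirds triples battery life, which is exactly the order of improvement the program reported.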
ACOUSID III Cutaway (USAF Drawing)
Ten years ago Air Force Magazine described the wide set of problems with false positives from these wireless sensors in a jungle. This honest analysis is a far cry from how the USAF originally fluffed up the technology to be as easy as “drugstore pinball” and give North Vietnamese “nowhere to hide”:
The challenge for the seismic sensors (and for the analysts) was not so much in detecting the people and the trucks as it was in separating out the false alarms generated by wind, thunder, rain, earth tremors, and animals—especially frogs.
There were other kinds of sensors as well. One of them was the “people sniffer,” which chemically sensed sweat and urine.
[…]
“We wire the Ho Chi Minh Trail like a drugstore pinball machine, and we plug it in every night,” an Air Force officer told Armed Forces Journal in 1971. “Before, the enemy had two things going for him. The sun went down every night, and he had trees to hide under. Now he has nothing.”
Here are the sort of acoustic details captured in working group studies hoping to isolate signals of frogs and shovels from soldiers and trucks:
Figure and Table from “Acoustical Working Group: Acquisition, Reduction and Analysis of Acoustical Data. An Unclassified Summary of Acoustical Working Group Studies.” NADC Report No. AWG-SU, 1974
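The working group’s predicament can be caricatured in a few lines of code: any simple rule that reliably fires on loud, rhythmic truck noise also fires on a frog chorus. The features and thresholds below are invented for illustration, not drawn from the NADC report:

```python
# Toy caricature of the seismic false-alarm problem: classify an "event"
# from two crude features a cheap sensor could plausibly extract.
# Real ADSID/ACOUSID processing was far more involved; these rules and
# numbers are invented purely to illustrate the failure mode.

def classify(peak_amplitude: float, periodicity: float) -> str:
    """periodicity: 0 = irregular noise, 1 = strongly rhythmic signal."""
    if peak_amplitude > 0.5 and periodicity > 0.6:
        return "vehicle"   # loud and rhythmic, like engine/track noise
    if peak_amplitude > 0.5:
        return "weather"   # loud but irregular: wind, thunder, rain
    return "nothing"

# A truck convoy: loud, rhythmic -> flagged.
print(classify(0.8, 0.9))   # vehicle

# A frog chorus: also loud and rhythmic -> false alarm.
print(classify(0.7, 0.8))   # vehicle
```

The frog chorus satisfies the same rule as the truck, which is the whole problem: the features a cheap remote sensor can extract simply do not separate the classes, no matter how fast the data reaches an IBM mainframe.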
Sensor Deployment
An F-4 Phantom jet, an OV-10 Bronco plane, or a CH-3 Jolly Green Giant helicopter was used for air drops. Given the large quantity of sensors, the frequency of drops, the size of the budget and the engineering talent involved, their placement wasn’t as sophisticated as one might imagine.
Here you can see a member of the 21st Special Operations Squadron (SOS), based in Nakhon Phanom (under the Dust Devils call sign), at low altitude dropping a sensor by hand.
Initially MC-130E Blackbirds were used to orbit and monitor the sensors. By 1970 the 554th Reconnaissance Squadron (under the Vampires call sign) started orbiting a QU-22B “drone” to pick up signals from the sensors and relay them back to the Infiltration Surveillance Center (ISC) at Nakhon Phanom Royal Thai Air Force Base.
Despite being engineered with complex electrical equipment to enable remote control, reliability failures meant every flight carried a pilot on board (a QU-22 reunion site interviews them).
The high-tech QU-22 drone program was cancelled after just two years following a number of crashes, including two inside Laos.
Command Center
Back at the ISC, computers made by IBM were connected to a giant wall-sized display of the area under surveillance, as well as touchscreen monitors (images from US Air Force Historical Research Agency):
Military Surveillance “Toys” Deployed in America
Despite President Nixon’s backing, the expense of Igloo White, coupled with many American casualties, compounded a failure to produce results that could justify continuing the program, especially after the North Vietnamese simply changed tactics. The program was cancelled by 1973, just as Nixon was infamously announcing he would criminalize being non-white.
Nixon had believed so strongly in the new surveillance technology that he had the same sensors deployed to his lawns and…of course the border with Mexico.
“Bringing the toys home from Vietnam” New Scientist 15 Jun 1972
That kind of outcome didn’t seem to dissuade some from thinking there is a bright future for military surveillance technology along America’s borders.
In 1989 the Air War College reported that the failed military surveillance on the Mexican border under Nixon had nonetheless given President Reagan a useful foundation for a military role in the criminalization of non-whites.
Reagan pushed so hard on invasive domestic military surveillance that the Posse Comitatus Act of 1878 was modified “to allow all branches of the Armed Forces to provide equipment, training, and assistance to the U.S. Coast Guard, U.S. Customs, and to other Drug Enforcement agencies”. Today it is widely known, and becoming uncontroversial to say, that the “war on drugs” was intentionally racist, criminalizing non-whites.
Conclusion
The true lesson of Igloo White was that an expensive technological military replacement (even domestically) for human intelligence gathering could be very fast yet also very expensive and never proven accurate. It will forever be known in history as a “self-inflicted wound” by Nixon that Reagan doubled down on.
Air Force Magazine, while admitting the USAF vastly overstated the success of their work, also emphasized analysis of data can be mishandled by everyone involved:
…7th Air Force’s “numbers game” was refuted by the CIA’s own “highly reliable sources,” referring to its agents in the enemy ranks. The CIA and the Defense Intelligence Agency developed a formula that arbitrarily discounted 75 percent of the pilot claims. […] Then, as now, the bomb damage assessment process was flawed on both ends: Operations tended to claim too much; Intelligence tended to validate too little.
For a perspective from the Laotian side, see “The Rocket”:
Update April 2021: Jim Bolen (former Special Forces operative in Laos and Cambodia on SOG operations to the Ho Chi Minh and Sihanouk Trails during the Vietnam War; recon team leader on over 40 SOG missions and extracted under fire from over 30) gives a new interview in which he describes “Reconnaissance Missions to the Ho Chi Minh Trail” and argues that the electronic sensors were infallible technology and that it was LBJ who wasn’t listening.
…we had planted electronic seismic sensors along the Trail coming out of North Vietnam. These sensors were monitored 24 hours a day 365 days a year by C-130 Blackbirds. The seismic sensors would pick up vibrations from truck or tank movements along the Trail.
Sensors dropped along the Ho Chi Minh Trail in Attapeu Laos. Source: Military Assistance Command, Vietnam – Studies and Observations Group
Update October 2021: Declassification of secret missions gives us the opposite of Jim Bolen, “MACV-SOG: A Conversation with John Stryker Meyer”, which includes a gold nugget of modest wisdom from a Green Beret who served on the ground at the time: the bombing campaigns in Laos were “completely useless”.
For several years I have tried to speak openly about why I find it disappointing that analysts rely heavily (sometimes exclusively) on language to determine who is a foreigner.
They are making some funny and highly improbable assumptions: … The attackers used Chinese language attack tools, therefore they must be Chinese. This is a reverse language bias that brings back memories of L0phtCrack. It only ran in English.
Here’s the sort of information I have presented most recently for people to consider:
You see above that analysts tell a reporter the presence of a Chinese language pack is the clue to Chinese design and operation of attacks on Russia. Then further investigation revealed the source actually was Korea. Major error, no? Yet it seems to have been reported as only an “oops” instead of a WTF.
At a recent digital forensics and incident response (DFIR) meeting I pointed out that the switch from Chinese to Korean origin of attacks on Russia of course was a huge shift in attribution, one with potential connections to the US.
This did not sit well with at least one researcher in the audience. “What proof do you have there are any connections from Korea to the US?” they yelled out. I assumed they were facetiously trying to see if I had evidence of an English language pack to prove my point.
In retrospect they may actually have been seriously asking me to offer clues why Korean systems attacking Russia might be linked to America. I regret not taking the time to explain what clues more significant than a language pack tend to look like. Cue old history lesson slides…but I digress.
A traitorous Confederate flag flies from an American M4 (A3E8?) in the “Forgotten War“
Here’s another slide from the same talk I gave about attribution and language. I point to census data with the number and location of Chinese speakers in America, and most popular languages used on the Internet.
Unlike McAfee, mentioned above, FireEye and Mandiant have continued to ignore the obvious and point to Chinese language as proof of someone being foreign.
Consider for a moment that the infamous APT1 report suggests that language proves nothing at all. Here is page 5:
Unit 61398 requires its personnel to be…proficient in the English language
Thus proving APT1 are English-speaking and therefore not foreigners? No, wait, I mean proving that APT1 are very dangerous because you can never trust anyone required to be proficient in English.
But seriously, Mandiant sets this out presumably to establish two things.
First, “requires to be proficient” is a subtle way to say Chinese never will do better than “proficient” (non-native) because, foreigners.
Second, the Chinese target English-speaking victims (“Only two victims appear to operate using a language other than English…we believe that the two non-English speaking victims are anomalies”). Why else would the Chinese learn English except to be extremely targeted in their attacks — narrowing their focus to basically everywhere people speak English. Extremely targeted.
And then on page 6 of APT1 we see supposed proof from Mandiant of something else very important. Use of a Chinese keyboard layout:
…the APT1 operator’s keyboard layout setting was “Chinese (Simplified) – US Keyboard”
On page 41 (suspense!) they explain why this matters so much:
…Simplified Chinese keyboard layout settings on APT1’s attack systems, betrays the true location and language of the operators
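For context, what Mandiant is describing is a Windows keyboard layout identifier (KLID), which any analyst can decode; the dispute is over what it proves. A minimal lookup sketch (the table is a tiny illustrative subset of the full Windows list):

```python
# Decode a few Windows keyboard layout identifiers (KLIDs).
# "00000804" is the layout cited in the APT1 report:
# Chinese (Simplified) - US Keyboard.
# This mapping is a small illustrative subset of the full Windows table.
KLID_NAMES = {
    "00000409": "English (United States)",
    "00000804": "Chinese (Simplified) - US Keyboard",
    "00000412": "Korean",
    "00000419": "Russian",
}

def describe_layout(klid: str) -> str:
    # Normalize to the 8-hex-digit form Windows uses.
    name = KLID_NAMES.get(klid.lower().zfill(8), "unknown layout")
    return f"{klid}: {name}"

print(describe_layout("00000804"))
```

Decoding the artifact is trivial; the attribution leap is not. A layout setting reflects system configuration, which a careful operator can set to anything, so by itself it is weak evidence of where anyone actually sits.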
Mandiant gets so confident about where someone is from based on assessing language that they even try to convince the reader Americans do not make grammar errors. Errors in English (failed attempts at proficiency) supposedly prove they are dealing with a foreigner.
Their own digital weapons betray the fact that they were programmed by people whose first language is not English. Here are some examples of grammatically incorrect phrases that have made it into APT1’s tools
It is hard to believe this is not meant as a joke. There is a complete lack of linguistic analysis here; just a strange assertion about proficiency. In our 2010 RSAC presentation on the linguistics of threats we gave analysis of phrases and showed how syntax and spellings can be useful to understand origins. I can only imagine what people would have said if we had tried to argue “Bad Grammar Means English Ain’t Your First Language”.
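To contrast with the “bad grammar means foreign” assertion, here is a toy version of one kind of signal actual linguistic analysis uses: spelling-variant tallies. The word lists are a tiny illustrative sample, nothing like a real forensic-linguistic model:

```python
import re

# Toy dialect tally: count American vs. British spelling variants.
# Real forensic linguistics looks at syntax, idiom, and error patterns;
# these word lists are a tiny illustrative sample only.
AMERICAN = {"color", "organize", "center", "defense"}
BRITISH = {"colour", "organise", "centre", "defence"}

def dialect_hint(text: str) -> str:
    words = set(re.findall(r"[a-z]+", text.lower()))
    us, uk = len(words & AMERICAN), len(words & BRITISH)
    if us > uk:
        return "American-leaning spelling"
    if uk > us:
        return "British-leaning spelling"
    return "no spelling signal"

print(dialect_hint("The colour scheme of the defence centre"))
```

Even this crude tally is a hypothesis generator, not proof of nationality, which is exactly the humility missing from “grammatically incorrect phrases” as evidence.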
Of course I am not saying Mandiant or others are wrong to have suspicions of Chinese connections when they find some Chinese language. Despite analysts wearing clothes with Chinese language tags and using computers that probably have Chinese language print, there may be some actual connections worth investigating further.
My point is that the analysis offered to support conclusions has been incredibly weak, almost to the point of being a huge distraction from the quality in the rest of the reports. It makes serious work look absurd when someone over-emphasizes language spoken as proof of geographic location.
Now, in some strange twist of “I told you so”, the Twittersphere has come alive with condemnation of an NSA analyst for relying too heavily on language.
Thank you to Chris and Halvar and everyone else for pointing out how awful it is when the NSA does this kind of thinking; please also notice how often it happens elsewhere.
More people need to speak out against this generally in the security community on a more regular basis. It really is far too common in far too many threat reports to be treated as unique or surprising when the NSA does it, no?
The Congo had 20 million people in 1885. Belgian King Leopold II then colonized it as his private white police state, which tortured and killed up to 10 million people.
Full disclosure: I spent my undergraduate and graduate degree time researching the ethics of intervention with a focus on the Horn of Africa. One of the most difficult questions to answer was how to define colonialism. Take Ethiopia, for example. It was never colonized and yet the British invaded, occupied and controlled it from 1940-1943 (the topic of my MSc thesis at LSE).
I’m not saying I am an expert on colonialism. I’m saying after many years of research including spending a year reading original papers from the 1940s British Colonial office and meeting with ex-colonial officers, I have a really good sense of how hard it is to become an expert on colonialism.
Since then, every so often, I hear someone in the tech community coming up with a theory about colonialism. I do my best to dissuade them from going down that path. Here came another opportunity on Twitter from Zooko:
This short post instantly changed my beliefs about global development. “The Dawn of Cyber-Colonialism” by @GDanezis
If nothing else, I would like to encourage Zooko and the author of “dawn of Cyber-Colonialism” to back away from simplistic invocations of colonialism and choose a different discourse to make their point.
Maybe I should start by pointing out an irony often found in the anti-colonial argument. The usual worry about “are we headed towards colonialism” is tied to some rather unrealistic assumptions. It is like a thinly-veiled way for someone to think out loud: “our technology is so superior to these poor savage countries, and they have no hope without us, we must be careful to not colonize them with it”.
A lack of self-awareness in commercial views is an ancient factor. John Stuart Mill, for example in the 1860s, used to opine that only through a commercial influence would any individual realize true freedom and self-governance; yet he feared colonialists could spoil everything through not restraining or developing beyond their own self-interests. His worry was specifically that colonizers did not understand local needs, did not have sympathy, did not remain impartial in questions of justice, and would always think of their own profits before development. (Considerations on Representative Government)
I will leave the irony of the colonialists’ colonialism lament at this point, rather than digging into what motivates someone’s concern about those “less-developed” people and how the “most-fortunate” will define best interests of the “less-fortunate”.
People tend to get offended when you point out they may be the ones with colonialist bias and tendencies, rather than those they aim to criticize for being engaged in an unsavory form of commerce. So rather than delve into the odd assumptions taken among those who worry, instead I will explore the framework and term of “colonialism” itself.
Everyone today hates, or should hate, the core concepts of colonialism, because the concept has been boiled down so much it is little more than an evil relic of history.
A tempting technique in discourse is to create a negative association. Want people to dislike something? Just broadly call it something they already should dislike, such as colonialism. Yuck. Cyber-colonialism, future yuck.
However, using an association to colonialism actually is not as easy as one might think. A simplified definition of colonialism tends to be quite hard to get anyone to agree upon. The subjugation of a group by another group through integrated domination might be a good way to start the definition. And just look at all the big words in that sentence.
More than occupation, more than unfair control or any deals gone wrong, colonialism is tricky to pin down because of elements of what is known as “colonus”: measuring success in terms of agrarian settlement rather than nomadic life.
Perhaps a reverse view helps clarify. Eve Tuck wrote in “Decolonization is Not a Metaphor” that restoration from colonization means being made whole (restoration of ownership and control).
Decolonization brings about the repatriation of Indigenous land and life; it is not a metaphor for other things we want to do to improve our societies and schools.
The exit-barrier to colonialism is not just a simple change to political and economic controls, and it’s not a competitive gain, it’s undoing systemic wrongs to make things right.
After George Zimmerman unjustly murdered Trayvon Martin — illegally stole a man’s life and didn’t pay for it — the #blacklivesmatter movement was making the obvious case for black lives to be valued. Anyone arguing against such a movement that values human life, or trying to distract from it with whataboutism (trying to refocus on lives that are not black), perpetuates an unjust devaluation (illegal theft, immoral end) to black life.
Successful colonies thus can be characterized by an active infiltration by people who settle in with persistent integration to displace and deprive control; anyone they find is targeted in order to “gain” (steal) from their acquired assets. Women are raped, children are abused, men are tortured… all the while being told that if they ask for equality, let alone reparations for loss, they are being greedy and will be murdered (e.g. lynched by the KKK).
It is an act of violent equity misdirection and permanent displacement coupled with active and forced reprogramming to accept severe and perpetual loss of rights as some kind of new norm (e.g. prison or labor camp). Early explorations of selfish corporations for profit gave little or nothing in return for their thefts, whenever they could find a powerful loophole like colonialism that unfairly extracted value from human life.
Removing something colonus, therefore, is unlike removing elements performing routine work along commercial lines. Even if you fire the bad workers, or remove toxic leadership, the effects of deep colonialism are very likely to remain. Instead, removal means to untangle and reverse steps that had created output under an unjust commercially-driven “civilization”; equity has to flow back to places forced to accept they would never be given any realization or control of their own value.
That is why something like de-occupation is comparatively easy. Even redirecting control, or cancelling a deal or a contract, is easy compared to de-colonization.
De-colonization is very hard.
If I must put it in terms of IT, what we’re talking about is hardware that actively tries to take control of my daily life and integrate into my existing processes while reducing my control of direction. It is not just a bad chip that can be patched or replaced; it is an entire business-process attack that requires deep rethinking of how gains/losses are calculated.
It would be like someone infecting our storage devices with bitcoin mining code or artificial intelligence (i.e. a chatbot or personal assistant) that not only drives profits but is also used to permanently settle into our environment and prevent us from having a final say about our own destiny. It’s a form of blackmail: having your own digital life ransomed back to you.
Reformulating business processes is very messy, and far worse than fixing bugs.
My study in undergraduate and graduate school really tried to make sense of the end of colonialism and the role of foreign influence in national liberation movements through the 1960s.
This was not a study of available patching mechanisms or finding a new source of materials. I never found, not even in the extensive work of European philosophers, a simple way to describe the very many facets of danger from always uninvited (or even sometimes invited) selfish guests who were able to invade and then completely run large complex organizations. Once inside, once infiltrated, the system has to reject the thing it somehow became convinced it chose to be its leader.
Perhaps now you can see the trouble with colonialism definitions.
Now take a look at this odd paraphrase of the Oxford Dictionary (presumably because the author is from the UK), used to set up the blog post called “The dawn of Cyber-Colonialism”:
The policy or practice of acquiring full or partial political control over another country’s cyber-space, occupying it with technologies or components serving foreign interests, and exploiting it economically.
Pardon my French, but this is complete bullshit. Such a definition, at face value, is far too broad to be useful. Partial control over another country, achieved by occupying it with stuff to serve foreign interests and exploiting it, sounds like what most would call imperialism at worst, commerce at best. I mean, nothing in that definition says “another country” is harmed. Harm seems essential. Subjugation is harmful. The definition also doesn’t say anything about anyone being opposed to the control or occupation, let alone the exploitation.
I’m not going to blow apart the definition bit-by-bit as much as I am tempted. It fails across multiple levels and I would love to destroy each.
Instead I will just point out that such a horrible definition would mean Ethiopia has to say it was colonized because of the British 1940 intervention to remove Axis invaders and put Haile Selassie back into power. Simple test. That definition fails.
Let me cut right to the chase. As I mentioned at the start, those arguing that we are entering an era of cyber-colonialism should think carefully whether they really want to wade into the mess of defining colonialism. I advise everyone to steer clear and choose other pejorative and scary language to make a point.
Actually, I encourage them to tell us how and why technology commerce is bad in precise technical details. It seems lazy for people to build false connections and use association games to create negative feeling and resentment instead of being detailed and transparent in their research and writing.
On that note, I also want to comment on some of the technical points found in the blog claiming to see a dawn of colonialism:
What is truly at stake is whether a small number of technologically-advanced countries, including the US and the UK, but also others with a domestic technology industry, should be in a position to absolutely dominate the “cyber-space” of smaller nations.
I agree in general there is a concern with dominance, but this representation is far too simplistic. It assumes the playing field is made up of countries (presumably the UK is mentioned because the blog author is from the UK), rather than what really is a mix of many associations, groups and power brokers. Google, for example, was famous in 2011 for boasting it had no need for any government to exist anymore. This widely discussed power hubris directly contradicts any thesis that subjugation or domination comes purely from the state apparatus.
Consider a small number of technologically-advanced companies. Google and Amazon are in a position to absolutely dominate the cyber-space of smaller nations. This would seem as legitimate a concern as past imperialist actions. We could see the term “Banana Republic” replaced as countries become a “Search Republic”.
It’s a relationship fairly easy to contemplate because we already see evidence of it. Google’s chairman told the press he was proud of “Search Republic” policies and completely self-interested commerce (the kind Mill warned about in 1861): he said “It’s called capitalism”
Given the mounting evidence of commercial and political threat to nations from Google, what does cyber-colonialism really look like in the near, or even far-off, future?
Back to the blog claiming to see a dawn of colonialism, here’s a contentious prediction of what cyber-colonialism will look like:
If the manager decides to go with modern internationally sourced computerized system, it is impossible to guarantee that they will operate against the will of the source nation. The manufactured low security standards (or deliberate back doors) pretty much guarantee that the signaling system will be susceptible to hacking, ultimately placing it under the control of technologically advanced nations. In brief, this choice is equivalent to surrendering the control of this critical infrastructure, on which both the economic well-being of the nation and its military capacity relies, to foreign power(s).
The blog author, George Danezis, apparently has no experience with managing risk in critical infrastructure or with auditing critical infrastructure operations so I’ll try to put this in a more tangible and real context:
Recently on a job in Alaska I was riding a state-of-the-art train. It had enough power in one engine to run an entire American city. Perhaps I will post photos here, because the conductor opened the control panels and let me see all of the great improvements in rail technology.
The reason he could let me in and show me everything was that the entire critical infrastructure was shut down. I was told this happened often. Whenever the central switching system had a glitch, which was more often than you might imagine, all the trains everywhere were stopped. After touring the engine, I stepped off the train and up into a diesel truck driven by a rail mechanic. His beard was as long as a summer day in Anchorage and he assured me trains have to be stopped due to computer failure all the time.
I was driven back to my hotel because no trains would run again until the next day. No trains. In all of Alaska. America. So while we opine about colonial exploitation of trains, let’s talk about real reliability issues today and how chips with backdoors really stack up. Someone sitting at the keyboard can worry about resilience of modern chips all they want but it needs to be linked to experience with “modern internationally sourced computerized system” used to run critical infrastructure. I have audited critical infrastructure environments since 1997 and let me tell you they have a very unique and particular risk management model that would probably surprise most people on the outside.
Risk is something rarely understood from an outside perspective unless time is taken to explore actual faults in big-picture environments and to study actual events happening now and in the past. In other words, you can’t do a very good job auditing without spending time doing the audit, on the inside.
A manager going with a modern internationally sourced computerized system is (a) subject to a wide spectrum of factors of great significance (e.g. dust, profit, heat, water, parts availability, supply chains), and (b) worried about the presence of backdoors for the opposite reason you might think; they represent hope for support and help during critical failures. I’ll say it again: they WANT backdoors.
It reminds me of a major backdoor I found in the flagship product of a huge international technology company. The door suggested potential access to sensitive information. I found it, I reported it. Instead of alarm, the company repeatedly assured me I had stumbled upon a “service” highly desirable to customers who did not have the resources, or the desire, to troubleshoot critical failures. I couldn’t believe it. But as the saying goes: one person’s bug is another person’s feature.
To make this absolutely clear, there is a book called “Back Door Java” by Jan Newberry that I highly recommend to anyone who thinks computer chips might be riddled with backdoors. It details how the culture of Indonesia celebrates the back door as an integral element of progress and resilience in daily life.
Cooking and gossip flow through a network of access to everyone’s kitchen at the back of the house, connected by alley. Service happens through back paths, not front ones, along lines of shared interest.
This is not so peculiar when you think about American businesses that hide critical services in alleys and loading docks, away from their main entrances. A hotel guest in America might say they don’t want any backdoors, until they realize they won’t be getting clean sheets, or even soap and toilet paper. The backdoor is not inherently evil and may actually be essential. The question is whether abuse can be detected or prevented.
Dominance and control are quite complex when you really look at the relationships of groups and individuals engaged in access paths both overt and covert.
So, back to the paragraph we started with: I would say a manager is not surrendering control in the way some might think when access is granted, even if that access is greater than what was initially negotiated or openly recognized.
Not opting for computerized technologies is also a difficult choice to make, akin to not having a mobile phone in the 21st century. First, it is increasingly difficult to source older hardware, and the low demand increases its cost. Without computers and modern network communications it is also impossible to benefit from their productivity benefits. This in turn reduces the competitiveness of the small nation infrastructure in an international market; freight and passengers are likely to choose other means of transport, and shareholders will disinvest. The Financial Times will write about “low productivity of labor” and a few years down the line a new manager will be appointed to select option 1, against a backdrop of an IMF rescue package.
That paragraph has an obvious false choice fallacy. The opposite of granting access (prior paragraph) would be not granting access. Instead we’re being asked in this paragraph to believe the only other choice is lack of technology.
Does anyone believe it is increasingly difficult to source older hardware? We are given no reason to. I’ll give you two reasons why old hardware could be increasingly easy to source: reduced friction and increased privacy.
About 20% of people keep their old device because it’s easier than selling it. Another 20% keep their device because of privacy concerns. That’s 40% of old hardware sitting and ready to be used, if only we could erase the data securely and make it easy to exchange for money. SellCell.com (trying to solve one of those problems) claims the stock of older cellphone hardware in America alone is now worth about $47 billion.
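Treating those figures as rough inputs rather than hard data, the arithmetic behind the claim can be sketched in a few lines (the variable names and the per-device breakdown are my own illustration, not from any cited source):

```python
# Back-of-envelope estimate of the idle used-phone supply.
# Inputs are the rough figures cited above, treated as assumptions:
#   ~20% of owners keep an old device out of convenience,
#   ~20% keep one over privacy concerns,
#   ~$47 billion is SellCell.com's estimate for the US pool.

convenience_share = 0.20            # kept because selling is a hassle
privacy_share = 0.20                # kept over data-erasure worries
idle_share = convenience_share + privacy_share

us_pool_value_usd = 47e9            # SellCell.com's cited estimate

print(f"Idle share of old devices: {idle_share:.0%}")
print(f"Estimated US value sitting unused: ${us_pool_value_usd / 1e9:.0f}B")
```

The point of the sketch is only that a large latent supply exists; the exact split between convenience and privacy does not change the conclusion.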
And who believes that low demand increases cost? What kind of economic theory is this?
Scarcity increases cost, but we do not have evidence of scarcity. We have the opposite. For example, there is no demand for the box of Blackberry phones sitting on my desk.
Are you willing to pay me more for a Blackberry because of low demand?
Even more suspect is the statement that without computers and modern network communications it is impossible for a country to benefit. Having been given a false choice (either have the latest technology or nothing at all), are we to believe everyone in the world who doesn’t buy the latest technology is doomed to fail and devalue their economy?
Apply this to ANY environment and it should be abundantly clear why this is not the way the world works. New technology is embraced slowly and cautiously (relative terms) versus known-good technology that has proven itself useful. Technology is bought over time, with varying degrees of being “advanced”.
To further complicate the choice, some supply chains have a really long tail because a device achieves timeless status and generates localized innovation with endless supplies (e.g. the infamous AK-47, classic cars).
To make this point clearer, just survey the impact of telecommunications providers in countries like South Africa, Brazil, India, Mexico, Kenya and Pakistan. I’ve written about this before on many levels and visited some of them.
I would not say it is the latest or greatest tech, but tech that is available, which builds economies by enabling disenfranchised groups to create commerce and increase wealth. When a customer tells me they can only get 28.8K modem speeds, I do not laugh at them or pity them. I look for solutions that integrate with slow links for incremental gains in resilience, transparency and privacy. When I’m told 250ms latency is the norm, it’s the same thing: I build solutions to integrate and provide incremental gains. It’s never all-or-nothing.
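To make the “incremental gains” point concrete, a first-order model of a slow link is simply latency plus serialization time. This is my own illustration (idealized, ignoring protocol overhead and retries), not a method from any cited source, but it shows why a 28.8K line with 250ms latency is perfectly usable for small, well-chosen payloads:

```python
# First-order transfer-time model for a constrained link:
#   time = fixed latency + (payload bits / bandwidth)
# Defaults match the figures mentioned above: a 28.8 kbps modem
# and 250 ms of latency. Real links would add overhead on top.

def transfer_seconds(payload_bytes: int,
                     bandwidth_bps: float = 28_800,
                     latency_s: float = 0.250) -> float:
    """Idealized one-way transfer time: setup latency plus serialization."""
    return latency_s + (payload_bytes * 8) / bandwidth_bps

# A 2 KB status report is sub-second; a 5 MB download takes ~20+ minutes.
print(f"2 KB report: {transfer_seconds(2_000):.2f} s")
print(f"5 MB file:   {transfer_seconds(5_000_000) / 60:.1f} min")
```

The design lesson is to size payloads to the link, batch what can wait, and send only deltas, rather than declaring the link useless.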
A micro-loan robot in India that goes into rough neighborhoods to dispense cash, for example, is a new concept based on relatively simple supplies that has a dramatic impact. Groups in a Kenyan village share a single cell-phone and manage it similarly to the old British phone booth. There are so many more examples, none of which break down in simple terms of the amazing US government versus technologically-poor countries left vulnerable.
And back to the blog paragraph we started with: my guess is the Financial Times will write about “productivity of labor” if we focus on real risk, and a few years down the line new managers will be emerging in more places than ever.
Maintaining the ability of western signals intelligence agencies to perform foreign pervasive surveillance requires total control over other nations’ technology, not just the content of their communication. This is the context of the rise of design backdoors, hardware trojans, and tailored access operations.
I don’t know why we should believe anything in this paragraph. Total control of technology is not necessary to maintain intelligence capability. That defies common sense. Total control is not necessary for intelligence to be highly effective, nor does total control guarantee better intelligence than partial or incomplete control (as explained best by David Hume).
My guess is that paragraph was written with those terms because they have a particular ring to them, meant to evoke a reaction rather than explain a reality or demonstrate proof.
Total control sounds bad. Foreign pervasive surveillance sounds bad. Design backdoors, Trojan horses and tailored access (opposite of total control) sound bad. It all sounds so scary and bad, we should worry about them.
But back to the point: even if we worry because such scary words are being thrown at us about how technology may be tangled into a web of international commerce and political purpose, nothing in that blog on “cyber-colonialism” comes even close to qualifying as colonialism.
a blog about the poetry of information security, since 1995