Automobile Control System “Eliminates” the Driver

All day today on Twitter I was twittering, tweeting, sending twats about the lack of historical perspective in the auto-auto, self-driving, driver-elimination…whatever you want to call this long-time-coming vehicle drone industry.

Here is a good example:

Not many faves for my tweet of the old RCA radio controlled drone concept

“Control system” was all the rage for terminology in the 1960s, I guess. Must have sounded better back then than cybernetics, which was coined in the 1940s to mean control system.

Consistent terminology is hard. Marketing old ideas is too. No one would say control system today, would they? They certainly wouldn’t say auto-auto.

The words automobile and automotive already have the word auto built-in. Seems tragic we have forgotten why we put “auto” there in the first place. Auto mobile, not so auto anymore?

An old racing friend called this becoming “velocitized”: after you get used to things at a certain speed, or done a certain way, you lose touch. So the word auto is no longer impressive. We need to speed up again or we will start to feel like we are standing still. We need more auto.

And so the “auto” industry wants to become automated again, but that sounds like failure, so let’s come up with a new phrase. Sure, history gets obscured, but that is the very problem of people becoming velocitized over time, so someone came up with “self-driving” to get our attention again.

Self-driving, aside from being disconnected from auto roots, sounds slightly sour and selfish because if it’s driving you or others around it really isn’t just “self” driving is it? What will conjugations of drone be for future generations? He, she, self-driving? Here comes the self-driving car. There goes the…not-just-self-driving car?

Sometimes I get stuck on stupid rule jokes, I know. Anyway, the General Motors of 1956 offered the world “a future vision of driverless cars.”

They called it the “far-off future of 1976”. No joke. Driverless cars by 1976. Crazy to think about that timeline today. Twenty years is all GM thought they’d need to get cars driving people around without a driver. No wait, I mean driving others around while driving themselves.

This wasn’t a flash-in-the-pan idea. Within just a couple of years RCA was in on the future vision, promoting a wireless system for coordinating drones. The front page of The New York Times on June 6, 1960 carried the headline:

Automobile Control System Eliminates the Driver

And it went so far as to back-date the research 7 years, promising full use by the far-off future of…wait for it…1975!

FRUIT OF 7 YEARS’ STUDY R.C.A. and G.M. Jointly Conducted It — Full Use Seen 15 Years Away

1960 NYT Cover Story on Driverless Cars

I want you to think carefully about a headline in 1960 that says robotic machines will “eliminate” humans. Hold that thought.

Going back 7 years would be 1953, which sounds like the GM Firebird rocket-car concept, with automobile control towers to avoid rocket collisions on the roads. Thank goodness for humans in those control towers.

1954 GM Firebird

By 1964 the idea of automation seemed to still be alive. GM’s Stiletto concept had rear-view cameras and ultra-sonic obstacle sensors. Surely those were a mere stepping-stone away from full drone. Or was there a slide backwards towards keeping human judgment in the mix?

1964 GM Stiletto dashboard

Take a guess at what happened in the intervening years that might have changed the messaging.

If you said “Cuban Missile Crisis” you win a vehicle…that eliminates humans.

Robert McNamara, who sat at the US Cabinet Level during the crisis, said this about automation:

“Kennedy was rational. Khrushchev was rational. Castro was rational,” and yet they were on a path that would push the world nearly to annihilation.

McNamara then wisely predicted “rationality will not save us”.

Odd thing about that guy McNamara: he was a top executive at Ford Motor Company before he joined President Kennedy’s Cabinet.

Perhaps it now is easier to see when and why views on automobile automation shifted. Instead of full speed ahead to 1975, as predicted, by 1968 you had popular culture generating future visions of 2001, where a self-driving spaceship attempts to “eliminate” its human passengers.

Spoiler alert: HAL took the term “self-driving” too literally.

The moral of this post (and of history) is: don’t trust automation, and choose your automation code words carefully. Beware especially the engineers who re-brand mistakes as being “too perfect” or completely rational, as if they don’t know who McNamara is or what he taught us. Because if you forget history you might be condemned to automate it.

The Beginning Wasn’t Full-Disclosure

An interesting personal account of vulnerability disclosure called “In the Beginning There was Full Disclosure” makes broad statements about the past.

In the beginning there was full disclosure, and there was only full disclosure, and we liked it.

I don’t know about you, but immediately my brain starts searching for a date. What year was this beginning?

No dates are given, only clues.

First clue: a reference to RFP.

So a guy named Rain Forest Puppy published the first Full Disclosure Policy promising to release vulnerabilities to vendors privately first but only so long as the vendors promised to fix things in a timely manner.

There may be earlier versions. The RFP document doesn’t have a date on it, but links suggest 2001. Lack of date seems a bit strange for a policy. I’ll settle on 2001 until another year pops up somewhere.

Second clue: vendors, meaning Microsoft.

But vendors didn’t like this one bit and so Microsoft developed a policy on their own and called it Coordinated Disclosure.

This must have been after the Gates memo of 2002.

Both clues say the beginning was around 2000. That seems odd because software-based updates in computers trace back to 1968.

It also is odd to say the beginning was a Microsoft policy called Coordinated Disclosure. Microsoft says they released that in 2010.

Never mind 2010. Responsible disclosure was the first policy/concept at Microsoft, because right after the Gates memo on security they were already discussing it in 2003; the debate flared up again years later when Tavis Ormandy decided unilaterally to release a 0day on XP in 2010.

Thus all of the signals, as I dug through the remainder of the post, suggest a history of vulnerability research beginning only around 15 years ago. To be fair, the author gives a couple of earlier references:

…a debate that has been raging in security circles for over a hundred years starting way back in the 1890s with the release of locksmithing information. An organization I was involved with, L0pht Heavy Industries, raised the debate again in the 1990’s as security researchers started finding vulnerabilities in products.

Yet these are too short a history (the 1890s were not the first release of locksmith secrets) and not independent (L0pht takes credit for raising the debate around themselves) for my tastes.

Locksmith secrets are thousands of years old, and their disclosure follows. Pin-tumblers get called Egyptian locks because that is where they are said to have originated; technically the Egyptians likely copied them from Mesopotamia (today Iraq). Who believes Mesopotamia was unhappy its lock vulnerabilities were known? And that is really only the tip of the iceberg of thousands of years of disclosure history.

I hear L0pht taking credit again. Fair point. They raised a lot of awareness while many of us were locked in dungeons. They certainly marketed themselves well in the 1990s. No question there. Yet were they raising the debate or joining one already in progress?

To me the modern distributed systems debate raged much, much earlier. The 1968 Carterfone case, for example, ignited a whole generation seeking boundaries for “any lawful device” on public communication lines.

In 1992 Wietse Venema appeared quite adamant about the value of full disclosure, as if trying to argue it needs to happen. By 1993 he and Dan Farmer published the controversial paper “Improving the security of your site by breaking into it”.

They announced a vulnerability scanner that would be made public, the first of its kind. For me this was a turning point in the industry: trying to justify visibility in a formal paper and force open discussion of risk in an environment that mostly had preferred secret fixes. The public Emergency Response and Incident Advisory concepts still meant working with vendors on disclosure, which I will get to in a minute.

As a side-note, the ISS founder claims to have written an earlier version of the same kind of vulnerability scanner. Although possible, so far I have found nothing outside his own claims to back this up. SATAN was free and far more widely recognized (e.g. the USENIX paper) and also was easily found running in the early 1990s. I remember when ISS first announced in the mid 1990s; it appeared to be a commercial version of SATAN that did not even try to distinguish or back-date itself.

But I digress. Disclosure of vulnerabilities in 1992 felt very controversial. The vulnerabilities I found were very hush-hush, and the steeped ethical discussions of exposing weakness were clearly captured in the Venema/Farmer paper. There definitely was still secrecy and not yet a full-disclosure climate.

Just to confirm I am not losing my memory, I ran a few searches on an old vulnerability disclosure list, the CIAC bulletins. Sure enough, right away I noticed secretive examples. A January 4, 1990 notice for the Texas Instruments D3 Process Control System gives no details, only:

TI Vuln Disclosure

Also in January 1990, Apple had the same type of vulnerability notice.

Even more to the point, and speaking of SATAN, I also noticed HP using a pre-release notice. This confirms my memory isn’t far off; full disclosure was not the norm. HP issued a notice before the researchers made the vulnerabilities public.

HP SATAN

Vendors shifted how they respond not because a researcher released a vulnerability under the banner of full disclosure, something a vendor had powerful legal and technical tools to dispute. Rather, SATAN changed the economics of disclosure by making the discussion with a vendor about self-protection through awareness first-person and free.

Anyone could generate a new report, anywhere, anytime, so the major vendors had to contemplate the value of responding to an overall “assessment” relative to other vendors.

Anyway, great thoughts on disclosure from the other blog, despite our differences on when and how these practices started. I am ancient in Internet years and perhaps more prone than most to dispute historical facts. Thus I encourage everyone to search early disclosures for further perspective on a “Beginning” and how things used to run.

Updates:

@ErrataRob points out that SATAN was automating what CERT had already outed, and that the BUGTRAQ mailing list (started in 1993) was meant to crowd-source disclosures after CERT wasn’t doing it very well. Before CERT, people traded vulns in secret for a long time. CERT made that harder, but it was BUGTRAQ that really shut down the trading, because reporting became so easy.

@4Dgifts points out that discussion of vulns on the comp.unix.security USENET newsgroup started around 1984.

@4Dgifts points out a December 1994 debate in which the norm clearly was not full disclosure. The author even suggests blackhats masquerade as whitehats to get early access to exploits:

All that aside, it is not my position to send out full disclosure, much as I might like to. What I sent to CERT was properly channeled through SCO’s CERT contact. CERT is a recognized and official carrier for such materials. 8LGM is, I don’t know, some former “black hat” types who are trying pretty hard to wear what looks like a “white hat” these days, but who can tell? If CERT believes in you then I assume you’ll be receiving a copy of my paper from them; if not, well, I know you’re smart enough to figure it out anyway.

[…]

Have a little patience. Let the fixed code propagate for a while. Give administrators in far off corners of the world a chance to hear about this and put up defenses. Also, let the gory details circulate via CERT for a while — just because SCO has issued fixes does not mean there aren’t other vendors whose code is still vulnerable. If you think this leaves out the freeware community, think again. The people who maintain the various login suites and other such publically available utilities should be in contact with CERT just as commercial vendors are; they should receive this information through the same relatively secure conduits. They should have a chance to examine their code and if necessary, distribute corrected binaries and/or sources before disclosure. (I realize that distributing fixed sources is very similar to disclosure, but it’s not quite the same as posting exploitation scripts).

US President Calls for Federal 30-day Breach Notice

Today the US moved closer to a federal consumer data breach notification requirement (healthcare has had a federal requirement since 2009 — see Eisenhower v Riverside for why healthcare is different from consumer).

PC World says a presentation to the Federal Trade Commission sets the stage for a Personal Data Notification & Protection Act (PDNPA).

U.S. President Barack Obama is expected to call Monday for new federal legislation requiring hacked private companies to report quickly the compromise of consumer data.

The states have taken differing approaches to breach deadlines, typically led by California (starting in 2003 with the SB1386 consumer breach notification law), and more recently led by healthcare. This approach seems to have given the Feds time to reflect on what is working before they propose a single standard.

In 2008 California moved to a more aggressive 5-day notification requirement for healthcare breaches after a crackdown on UCLA executive management missteps in the infamous Farrah Fawcett breaches (under Gov. Schwarzenegger).

California this month (AB1755, effective January 2015, approved by the Governor September 2014) relaxed its healthcare breach rules from 5 to 15 days after reviewing 5 years of pushback on interpretations and fines.

For example, in April 2010, the CDPH issued a notice assessing the maximum $250,000 penalty against a hospital for failure to timely report a breach incident involving the theft of a laptop on January 11, 2010. The hospital had reported the incident to the CDPH on February 19, 2010, and notified affected patients on February 26, 2010. According to the CDPH, the hospital had “confirmed” the breach on February 1, 2010, when it completed its forensic analysis of the information on the laptop, and was therefore required to report the incident to affected patients and the CDPH no later than February 8, 2010—five (5) business days after “detecting” the breach. Thus, by reporting the incident on February 19, 2010, the hospital had failed to report the incident for eleven (11) days following the five (5) business day deadline. However, the hospital disputed the $250,000 penalty and later executed a settlement agreement with the CDPH under which it agreed to pay a total of $1,100 for failure to timely report the incident to the CDPH and affected patients. Although neither the CDPH nor the hospital commented on the settlement agreement, the CDPH reportedly acknowledged that the original $250,000 penalty was an error discovered during the appeal process, and that the correct calculation of the penalty amount should have been $100 per day multiplied by the number of days the hospital failed to report the incident to the CDPH for a total of $1,100.
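
To make the arithmetic in that settlement concrete, here is a minimal sketch in Python (assuming a plain Monday-to-Friday business-day count with no holidays; the helper name is my own, not from any statute) of how the corrected $1,100 penalty falls out of the dates quoted above:

    from datetime import date, timedelta

    def add_business_days(start, days):
        # Walk forward the given number of business days (Mon-Fri), skipping weekends.
        current = start
        while days > 0:
            current += timedelta(days=1)
            if current.weekday() < 5:  # 0-4 are Monday through Friday
                days -= 1
        return current

    confirmed = date(2010, 2, 1)    # breach "confirmed" per the CDPH
    reported = date(2010, 2, 19)    # date actually reported to the CDPH

    deadline = add_business_days(confirmed, 5)   # -> 2010-02-08
    days_late = (reported - deadline).days       # -> 11 calendar days
    penalty = 100 * days_late                    # -> 1100

    print(deadline, days_late, penalty)

Eleven days at $100 per day gives the $1,100 the CDPH settled on, a far cry from the $250,000 maximum it first assessed.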

It is obvious that too long a timeline hurts consumers. Too short a timeline, meanwhile, has proven to force mistakes, with covered entities rushing to conclusions and then sinking time into recovering from unjust fines and repairing their reputations.

Another risk with too-short timelines (and a complaint you will hear from investigation companies) is that early notification undermines good/secret investigations (e.g. criminals will erase their tracks). This is a valid criticism; however, it does not clearly outweigh the benefits of early notification to victims.

First, a law-enforcement delay caveat is meant to address this concern. AB1755 allows a report to be submitted 15 days after the end of a law-enforcement-imposed delay period, similar to caveats found in prior requirements to assist important investigations.

Second, we have not seen huge improvements in attribution or accuracy after extended investigation time, mostly because politics starts to set in. I am reminded of when Walmart in 2009 admitted to a 2005 breach. Apparently they used the time to prove they did not have to report the credit card theft.

Third, consider value relative to the objective of protecting data from breach. Take the 30-day Mandiant 2012 report for the South Carolina Department of Revenue. It ultimately was unable to figure out who attacked (although it still hinted at China), and it is doubtful any more time would have resolved that question. The AP has reported that Mandiant charged $500K or more, and it also is doubtful many will find such high costs justified. Compare that investigation bill with the cost of improving victim protection:

Last month, officials said the Department of Revenue completed installing the new multi-password system, which cost about $12,000, and began the process of encrypting all sensitive data, a process that could take 90 days.

I submit to you that a reasonably short and focused investigation time saves money and protects consumers early. Delay for private investigation brings little benefit to those impacted. Fundamentally, who attacked tends to be less important than how a breach happened, and determining how takes a lot less time to investigate. As an investigator I always want to get to the who, yet I recognize this is not in the best interest of those suffering. So we see diminishing value in waiting and increasing value in notification. Best to apply fast pressure; 30 days seems reasonable enough to allow investigations to reach conclusive and beneficial results.

Internationally, Singapore has the shortest deadline I know of, at just 48 hours. If anyone thinks keeping track of all the US state requirements has been confusing, working globally gets really interesting.
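
As a toy illustration of why those clocks get confusing, here is a small sketch that turns the handful of deadlines mentioned in this post into notify-by dates, including the kind of law-enforcement hold AB1755 allows. The deadline table is illustrative only (real statutes hinge on definitions like “discovery” and business days), and the function and names are my own, not from any law:

    from datetime import datetime, timedelta

    # Deadlines as discussed in this post; illustrative, not legal guidance.
    DEADLINES = {
        "Proposed US federal (PDNPA)": timedelta(days=30),
        "California healthcare (AB1755)": timedelta(days=15),
        "Singapore": timedelta(hours=48),
    }

    def notify_by(discovered, jurisdiction, law_enforcement_hold_ends=None):
        # A law-enforcement hold restarts the clock when the hold ends,
        # similar to the AB1755 caveat described above.
        start = law_enforcement_hold_ends or discovered
        return start + DEADLINES[jurisdiction]

    discovered = datetime(2015, 1, 12, 9, 0)
    for name in DEADLINES:
        print(name, "->", notify_by(discovered, name))

Run against a single discovery date, the spread between 48 hours and 30 days makes it obvious why a single federal standard appeals to anyone operating across jurisdictions.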

Update, Jan 13:

Brian Krebs blogs his concerns about the announcement:

Leaving aside the weighty question of federal preemption, I’d like to see a discussion here and elsewhere about a requirement which mandates that companies disclose how they got breached. Naturally, we wouldn’t expect companies to disclose publicly the specific technologies they’re using in a public breach document. Additionally, forensics firms called in to investigate aren’t always able to precisely pinpoint the cause or source of the breach.

First, federal preemption of state laws sounds worse than it probably is. Covered entities of course want more local control at first, to weigh in heavily on politicians and set the rule. Yet look at how AB1755 in California unfolded: the medical lobby tried to get the notification window moved from 5 days to 60 days and ended up at 15. A federal 30-day rule, even where preemptive, isn’t completely out of the blue.

Second, disclosure of “how” a breach happened is a separate issue. The payment industry is the most advanced in this area of regulation; they have a council that releases detailed methods privately in bulletins. The FBI also has private methods to notify entities of what to change. Even so, generic bulletins are often sufficient to be actionable. That is why I mentioned the South Carolina report earlier. Here you can see useful details are public despite their applicability:

Mandiant Breach Report on SCDR

Obama also is expected today to make a case in front of the NCCIC for better collaboration between the private and government sectors (Press Release). That will be the forum for this separate issue. It reminds me of the 1980s debate about control of the Internet, led by Rep. Glickman and decided by President Reagan; the outcome was a new NIST and the awful CFAA. Let’s see if we can do better this time.

Letters from the White House:

The (Secret) History of the Banana Split

Executive summary: The popular dessert called the “banana split” is a by-product, or modern representation, of America’s imperialist expansion and corporate-led brutal subjugation of freedoms in foreign nations during the early 1900s.

Inexpensive exotic treat drugstore ad
Long form: If there is a quintessential American dessert it is the banana split.

But why?

Although we can go way back and credit Persians and Arabs with the invention of ice cream (nice try, China), the idea of putting lots of scoops of the stuff on top of a split banana “vessel” covered in sweet fruits and syrups…surely that over-extravagance derives from American culture.

After reading many food history pages and mulling their facts a bit I realized something important was out of place.

There had to be more to this story than Americans simply having abundance and desire, all their fixings smashed together, and one day someone putting everything into one dessert.

Again why exactly in America? And perhaps more importantly, when?

I found myself digging around for history details and eventually ended up with this kind of official explanation.

In 1904 in Latrobe, the first documented Banana Split was created by apprentice pharmacist David Strickler — sold here at the former Tassell Pharmacy. Bananas became widely available to Americans in the late 1800s. Strickler capitalized on this by cutting them lengthwise and serving them with ice cream. He is also credited with designing a boat-shaped glass dish for his treat. Served worldwide, the banana split has become a prevalent American dessert.

The phrase that catches my eye, almost lost among the other boring details, is that someone with an ingredient “widely available…capitalized”; capitalism appears to be the key to unlock this history.

And did someone say boat?

Immigration and Trade

Starting with the ice cream, attribution goes first to Italian immigrants who brought spumoni to America around the 1870s.

This three-flavor ice cream often came in the colors of the home country’s flag: cherry, pistachio, and either chocolate or vanilla (red, green, and sometimes white). Once in America this Italian tradition of a three-flavor treat was adapted to local tastes: chocolate, strawberry and vanilla. Ice cream became far more common and widely available by the 1880s, so experimentation was inevitable as competition boomed. It obviously was a very popular food by the 1904 St. Louis World’s Fair, which famously popularized eating out of Italian waffle “cones”.

In parallel, new trade developments emerged. Before the 1880s there were few bananas to be found in America. America bought around $250K worth of bananas in 1871; only thirty years later imports had jumped an amazing 2,460% to $6.4m and were in danger of becoming too common on their own.

Bananas being both easily sourced and yet still exotic made them ideal for experiments with ice-cream. The dramatic change in trade and availability was the result of a corporate conglomerate formed in 1899 called the United Fruit Company. I’ll explain more about them in a bit.

At this point what we’re talking about is just Persian/Arab ice cream, modified and brought by Italian immigrants to America, then modified again and dropped onto a newly available North American (Central, if you must) banana of capitalism, on a boat-shaped dish to represent far-away origins.

Serving up these fixings as the banana split makes a lot of sense when you put yourself in the shoes of someone working in a soda/pharmacy business in 1904, trying to increase business by offering some kind of novel or trendy treat.

Bananas and Pineapples Were Exotic New Things to Americans

Imagine you’re in a drugstore and supposed to be offering something “special” to draw in customers. People could go to any drugstore; what can you dazzle them with?

You pull out this newly available banana fruit, add the three most popular flavors (not completely unfamiliar, but a lot all at one time), and then dump all the sauces you’ve got on top. You now charge double the price of any other dessert. Would you add pineapple on top? Of course!

The pineapple had just arrived fresh off the boat in a new promotion by the Dole corporation:

In 1899 James Dole arrived in Hawaii with $1000 in his pocket, a Harvard degree in business and horticulture and a love of farming. He began by growing pineapples. After harvesting the world’s sweetest, juiciest pineapples, he started shipping them back to mainland USA.

I have mentioned before on this blog how the US annexed Hawaii by sending in the Marines. Food historians rarely bother to talk about this side of the equation, so indulge me for a moment. Interesting timing of the pineapple, no? I sense a need for a story about the Dole family to be told.

The Dole Family

The arrival of James Dole in Hawaii in 1899, and the resulting sudden widespread availability of pineapples in drugstores for banana splits, is a dark chapter in American politics.

James was following the lead of his cousin Sanford Ballard Dole, who had been born in Hawaii in 1844 to Protestant missionaries and nursed by native Hawaiians after his mother died in childbirth. Sanford was open about his hatred of the local government and had vowed to remove and replace it with American immigrants, people who would help his newly arrived cousin James viciously protect the family’s accumulation of wealth.

James Dole pictured grabbing a pineapple: “I swear I just was examining this large juicy warm fruit for quality”

1890 American Protectionism and Hawaiian Independence

To understand the shift Dole precipitated and participated in, back up from 1899 to 1890, when the US Republican Congress approved the McKinley Tariff. This raised the cost of imports to America by 40-50%, striking fear into Americans trying to profit in Hawaii by exporting goods. Although the Tariff left an exception for sugar, it still explicitly removed Hawaii’s “favored status” and rewarded domestic production.

Within two years of the Tariff, sugar exports from Hawaii had dropped a massive 40%, throwing the economy into shock. Plantations run by white American businessmen quickly cooked up ideas to reinstate profits; their favored plan was to remove Hawaii’s independence and deny sovereignty to its people.

At the same time these businessmen were cooking up plans to violently end Hawaiian independence, Queen Liliʻuokalani ascended to the throne and indicated she would reduce foreign interference in the country by drafting a new constitution.

These two sides were on a collision course for disaster in 1892, despite the US government shifting dramatically towards Democratic control (leading straight to the 1894 repeal of the McKinley Tariff). The real damage of the Republican platform was that Dole could falsely use his own party’s position as a shameless excuse to call himself a victim needing intervention. As Hawaii’s new ruler hinted that more national control was needed, the foreign businessmen in Hawaii begged America for annexation to violently cement their profitability and remove self-rule.

It was in this context that, in early 1893, a loyalist policeman happened to notice large amounts of ammunition being delivered to businessmen planning a coup, and was shot and wounded for it. The pretext of an armed “uprising” was used to force the Queen to abdicate power to a government inserted by the sugar barons, led by Sanford Dole. US Marines stormed the island to protect the interests of elitist foreign businessmen exporting sugar to America, despite America only recently operating under a government that wanted to reduce imports. Sanford’s pro-annexation government, ushered in by shrewd political games and US military might, now was firmly in place, as he had vowed.

The Hawaiian nation’s fate seemed sealed already, yet it remained uncertain through the “Panic of 1893” and the depression of the 1890s. By 1896 a newly elected US President (Republican McKinley) openly opposed, on principle, imperialism and annexation; he even spoke of support for the Queen of Hawaii. However, congressional (Republican) pressure mounted against him, and through 1897 the President seemed less and less likely to fight the annexation lobby.

Finally, as war with Spain unfolded in 1898, Hawaii was labeled strategically important and definitively lost its independence to the American military. The irony, it would seem, is that the US went to war with Spain on the premise of ending the increasingly brutal suppression of the Cuban independence movement underway since 1895.

Few Americans I speak with realize that their government basically sent military forces to annex Hawaii to protect the profits of American missionaries and plantation owners delivering sugar to the US, and then sealed the annexation as convenient for war (even though the annexation was officially completed after Dewey had defeated the Spanish in Manila Bay and the war was ending).

The infamous Blount Report (arguably a partial voice in these matters, yet also more impartial than the pro-annexation Morgan Report, which has been used improperly to criticize Blount) documented evidence of exactly this.

Total Control Over Fruit Sources

OK, segue complete. Remember how President Sanford Dole’s cousin James arrived in Hawaii in 1899, ready to start shipments of cheap pineapples? His arrival and success were a function of that annexation of an independent state; the creation of a pro-American puppet government lured James there to facilitate business and military interests.

This is why drugstores in 1904 suddenly found ready access to pineapple to dump on their bananas with ice cream. And speaking of bananas, their story is quite similar. The United Fruit Company I mentioned at the start quickly was able to establish US control over plantations in many countries:

Exports of the UFC “Great White Fleet”

  • Colombia
  • Costa Rica
  • Cuba
  • Jamaica
  • Nicaragua
  • Panama
  • Santo Domingo
  • Guatemala

Nearly half of Guatemala fell under the control of the US conglomerate, apparently, and yet no taxes had to be paid; telephone communications, as well as railways, ports and ships, all were owned by the United Fruit Company. The massive level of US control initially was portrayed as an investment and a benefit to locals, although hindsight has revealed another explanation.

“As for repressive regimes, they were United Fruit’s best friends, with coups d’état among its specialties,” Chapman writes. “United Fruit had possibly launched more exercises in ‘regime change’ on the banana’s behalf than had even been carried out in the name of oil.” […] “Guatemala was chosen as the site for the company’s earliest development activities,” a former United Fruit executive once explained, “because at the time we entered Central America, Guatemala’s government was the region’s weakest, most corrupt and most pliable.”

Thus the term “banana republic” was born to describe those countries under the thumb of “Great White” businessmen.

US "Great White" power over foreign countries
The “Great White” map of UFC power over foreign countries

And while “banana republic” was meant by white businessmen intentionally to be pejorative and negative, it was gladly adopted in the 1980s by a couple of Americans. Their business model was to travel the world and blatantly “observe” clothing designs in other countries to resell as a “discovery” to their customers back home. Success at appropriating ideas led to the big-brand stores selling inexpensive clothes that most people know today, found in most malls. The irony of saying “banana republic” surely has been lost on everyone, just as “banana split” isn’t thought of as a horrible reminder of injustices.

The popularity of “banana republic” labels and branding, let alone a dessert, just proves how little anyone remembers or cares about the cruel history behind these products and terms.

Nonetheless, now you know the secret behind the widespread availability of the inexpensive ingredients that made this famous and iconic American dessert possible and popular.