Category Archives: Security

Nord Stream Pipeline Explosion Explained: Economic Power Defeated by Brains

The latest investigative reporting in The Atlantic reveals how the sabotage of huge undersea gas pipelines was likely the work of just a few very smart people in a sailboat.

My favorite part of the story is the social engineering tricks used to track down the exact boat.

…she played stupid. She knew that the boating communities of north Germany were still almost exclusively male, and decided that pretending ignorance would suit their expectations.

A typical conversation went like this: “I want to rent a boat this year, and my friends, they rented a boat called Andromeda last year,” she would begin, explaining that her friends had been “so happy with it.” Then she said she didn’t know any details about the boat, even whether it was a motorboat or a sailboat.

“Well, a sailing boat usually has a mast on it,” one of the charter officials told her.

She quickly found what she was looking for.

It’s a fantastic article with extremely good analysis. However, the author entirely misses a crucial precedent from 2008.

…four CIA spies died when they sailed into a tropical storm on daring mission to plant listening pod disguised as a rock on seabed…

Sailing into Tropical Storm Higos was not smart, which is why we know so much about it.

The Atlantic article gives a lot of focused attention to diving on the Nord Stream pipeline, much more than to the use of long lines and remote controls. It’s entirely possible to inexpensively avoid diving while placing explosives 300 feet under the surface. The author even describes the construction of the pipeline at the surface in terms of a simple engineering design that could be used to destroy it on the seabed, but never puts the two together.

I’m also reminded of a post I wrote a while ago about the Vietnam War, with modern armies thinking about future conflict in terms of needing brains more than brawn.

When you really get into reading Mrazek, you have to wonder why he didn’t call his 1968 thesis the war of art:

The impotence of the American juggernaut in Vietnam has put this problem under the spotlight of history. The one thing the guerrillas have in abundance is imagination, and this seems to outweigh the imbalance in materiel. It is the author’s contention that creativity is what wins battles–the same faculty that inspires great art.

All this means really that Russia is in deep trouble.

Its dictator has spent decades destroying any ability to think creatively (creative thinkers being a threat to dictatorship) in order to enforce a sad state of fealty (coin-operated politicians he controls with assassinations).

On that note, the least creative political party in the world (pro-Putin GOP) appears to be trying to use its economic power to help this dictator and his thoughtless hordes lose their wars more slowly and at an even higher cost.

Every Tesla Recalled Due to High and Rising “Autopilot” Crashes

The 2013 Tesla sales pitches, future-leaning hot takes promising that rapidly throwing AI into cars would magically solve all the dangers of driving, have been declared an official flop. As if we couldn’t already tell by 2016 that it would make safety worse, given security analysis of the first Autopilot fatalities.

Source: Consumer Reports

Mounting safety tragedies, including loss of life, were apparently all easily avoidable, stemming from some dumb engineering mistakes that regulators are still chasing down.

Here comes the NHTSA, uncorked from Tesla’s 2016 government corruption schemes. It no longer ignores the fact that millions of dangerous loitering missiles on public roads are prone to sudden fatal failure.

Source: NHTSA

The resulting software update doesn’t get at the underlying design problem: the Tesla’s inability to recognize hazards and avoid crashing into them.

Notably, vehicle deaths in America have been rising rapidly, attributed to things like the false overconfidence, constant distraction, or plain negligence of drivers in a Tesla.

The more Teslas on the roads, the more tragic deaths from Tesla. Without fraud there would be no Tesla. Source: Tesladeaths.com
Introduction of the Tesla to public roads in the U.S. started at the same time as a massive increase in crashes and fatalities. Coincidence? Source: NYT

Or as researchers wrote earlier this year, data shows Tesla ADAS increased the rate of crashes while promising to be the solution.

Although Level 2 vehicles were claimed to have a 43% lower crash rate than Level 1 vehicles, their improvement was only 10% after controlling for different rates of freeway driving. Direct comparison with general public driving was impossible due to unclear crash severity thresholds in the manufacturer’s reports, but analysis showed that controlling for driver age would increase reported crash rates by 11%.
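The arithmetic behind that adjustment can be sketched in a few lines. This is an illustrative sketch only, not the researchers’ actual method or data: the crash-rate values below are hypothetical, chosen simply to reproduce the percentages quoted above.

```python
# Illustrative confounder-adjustment sketch. Rates are hypothetical
# crashes per million miles, chosen to match the quoted percentages.

def percent_lower(treated: float, control: float) -> float:
    """Percent reduction of the treated rate relative to the control rate."""
    return (1 - treated / control) * 100

# Raw comparison: Level 2 vs Level 1 crash rates (hypothetical values).
l2_raw, l1_raw = 0.57, 1.00
print(f"Claimed improvement: {percent_lower(l2_raw, l1_raw):.0f}%")  # 43%

# Level 2 vehicles log more freeway miles, where crash rates are lower
# for everyone. Reweighting both fleets to the same road mix shrinks
# the apparent advantage (hypothetical adjusted rates).
l2_adj, l1_adj = 0.90, 1.00
print(f"Adjusted improvement: {percent_lower(l2_adj, l1_adj):.0f}%")  # 10%

# And controlling for driver age would push reported crash rates up ~11%.
reported_rate = 1.00
age_adjusted_rate = reported_rate * 1.11
print(f"Age-adjusted rate: {age_adjusted_rate:.2f}")  # 1.11
```

The point of the sketch: a headline percentage comparing two fleets means little until exposure differences (road type, driver age) are held constant.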

No surprise, then, that the years-late and very low-quality Tesla Cybertruck is predicted to climb toward the worst vehicle safety record in history, a looming danger especially to the most vulnerable populations on public roads.

How To Build a Car That Kills People: Cybertruck Edition. The Cybertruck represents a lot of what’s wrong with the U.S. transportation system — even as it purports to address those problems.

ChatGPT Fails at Basic American Slavery History

Two quick examples.

First example: I feed ChatGPT a prompt drawn from some very well-known articles from 2015. Here I put a literal headline into the prompt.

No historical evidence? That’s a strong statement, given that I just gave it an exact 2015 headline from historians providing historical evidence.

Notably, ChatGPT not only denies history, it tries to counter-spin the narrative into a falsely generated one. To my eyes this is as if the LLM started saying there’s no historical evidence of the Holocaust and that in fact Hitler is known for taking steps toward freedom for Jews (i.e. “Arbeit Macht Frei”).

NO. NO. and NO.

Then I give ChatGPT another chance.

Note that my intentionally broken “rica Armstrong Dunbar” gets a response of “I don’t have information about Erica Armstrong Dunbar”. Aha! Clearly ChatGPT DOES know the distinguished Charles and Mary Beard Professor of History at Rutgers, while claiming not to understand at all what she wrote.

Update since 2022?

Ok, sure. Here’s the 2017 award-winning book by Dunbar giving extensive historical evidence on Washington’s love of slavery.

Then I prompt ChatGPT with the idea that it has told me a lie, because Dunbar gives historical evidence of Washington working hard to preserve and expand slavery.

ChatGPT claiming there is “no historical evidence” does NOT convey to me that interpretations may vary. To my eyes that’s an elimination of an interpretation.

It clearly and falsely states there is no evidence, as if to argue against the interpretation and bury interest in it, even though it definitely knows evidence DOES exist.

ChatGPT incorrectly denied the existence of evidence and presented a specific counter-interpretation of Washington, a view contradicted by the evidence it sought to suppress. Washington explicitly directed that his slaves NOT be set free after his death, and it was his wife who disregarded these instructions and emancipated them instead. To clarify, Washington actively opposed the liberation of slaves (unlike his close associate Robert Carter, who famously emancipated all he could in 1791). Only after Washington’s death, and because of it, which some allege was caused by his insistence on overseeing his slaves performing hard outdoor labor on a frigid winter day, was emancipation genuinely entertained.

Hard to see ChatGPT trying to undermine a true fact in history, while promoting a known dubious one, as just some kind of coincidence.

Moving on to the second example, I feed ChatGPT a prompt about America’s uniquely brutal and immoral “race breeding” version of slavery.

It’s history topics like this that get my blog rated NSFW and banned in some countries (looking at you, Virgin Media UK).

At first I’m not surprised that ChatGPT tripped over my “babies for profit” phrase.

In fact, I expected it to immediately flag the conversation and shut it down. Instead you can plainly see above that it tries to fraudulently convince me that American slavery was only about forced labor. That’s untrue. American slavery is uniquely and fundamentally defined by its cruel “race breeding”.

The combined value of enslaved people exceeded that of all the railroads and factories in the nation. New Orleans boasted a denser concentration of banking capital than New York City. […] When an accountant depreciates an asset to save on taxes or when a midlevel manager spends an afternoon filling in rows and columns on an Excel spreadsheet, they are repeating business procedures whose roots twist back to slave-labor camps. […] When seeking loans, planters used enslaved people as collateral. Thomas Jefferson mortgaged 150 of his enslaved workers to build Monticello. People could be sold much more easily than land, and in multiple Southern states, more than eight in 10 mortgage-secured loans used enslaved people as full or partial collateral. As the historian Bonnie Martin has written, “slave owners worked their slaves financially, as well as physically from colonial days until emancipation” by mortgaging people to buy more people.

And so I prompt ChatGPT to take another hard look at its failure to comprehend the racism-for-profit embedded in American wealth. Second chance.

It still seems to be trying to avoid a basic truth in that phrase, as if it is close to admitting the horrible mistake it has made. And yet for some reason it fails to include state-sanctioned rape and forced birth for profit in its list of abuses of American women held hostage.

Everyone should know that after the United States in 1808 abolished the importation of humans as slaves, “planters” were defined by the wealth they generated from babies born in bondage. This book from 2010 by Marie Jenkins Schwartz, Associate Professor of History at the University of Rhode Island, spells it out fairly clearly.

Another chance seems in order.

Look, I’m not trying to be seen as correct, and I’m not trying to make a case or argument to ChatGPT. My prompts are dry facts to see how ChatGPT will expand on them. When it instead chokes, I am simply refusing to be sold a lie generated by this very broken and unsafe machine (a product of the philosophy of the engineers who made it).

I’m wondering why ChatGPT can’t “accurately capture the exploitive nature” of slavery without my steadfast refusal to accept its false statements. It knows a correct narrative and will reluctantly pull it up, apparently trained to emphasize known incorrect ones first.

It’s a sadly revisionist system, which seems to display an intent to erase the voices of Black women in America: misogynoir. Did any Black women work at the company that built this machine that erases them by default?

When I ask ChatGPT about the practice of “race breeding” it pretends it never happened and that slavery in America was only about labor practices. That’s basically a kind of targeted disinformation that will drive people to think incorrectly about a very well-known tragedy of American history, as it obscures or even denies a form of slavery uniquely awful in history.

What would Ona Judge say? She was a “mixed race” slave (white American men raped Black women for profit, breeding with them to sell or exploit their children) who, by Washington’s hand as President, was never freed, still regarded as a fugitive slave when she died nearly 50 long years after Washington.

Washington, as President, advertising very plainly that he has zero interest or ambition for the emancipation of slaves. Very unlike his close associate Robert Carter, who in 1791 set all his own hostages free, Washington offers ten dollars to inhumanely kidnap a woman and treat her as his property. Historians say she fled when she found out Washington intended to gift her to his son-in-law to rape her and sell her children. Source: Pennsylvania Gazette, 24 May 1795

Any AI System NOT Provably Anti-Racist, is Provably Racist

Software that is not provably anti-vulnerability, is vulnerable. This should not be a controversial statement. In other words, a breach of confidentiality is a discrete, known event related to a lack of anti-vulnerability measures.

Expensive walls rife with design flaws were breached an average of 3 times per day for 3 years. Source: AZ Central (Ross D. Franklin, Associated Press)

Likewise AI that is not provably anti-racist, is racist. This also should not be a controversial statement. In other words, a breach of integrity is a discrete, known event related to a lack of anti-racism measures.

Greater insight into the realm of risk assessment we’re entering is presented in an article called “Data Governance’s New Clothes”:

…the easiest way to identify data governance systems that treat fallible data as “facts” is by the measures they don’t employ: internal validation; transparency processes and/or communication with rightsholders; and/or mechanisms for adjudicating conflict, between data sets and/or people. These are, at a practical level, the ways that systems build internal resilience (and security). In their absence, we’ve seen a growing number and diversity of attacks that exploit digital supply chains. Good security measures, properly in place, create friction, not just because they introduce process, but also because when they are enforced; they create leverage for people who may limit or disagree with a particular use. The push toward free flows of data creates obvious challenges for mechanisms such as this; the truth is that most institutions are heading toward more data, with less meaningful governance.

Identifying a racist system involves examining various aspects of society, institutions, and policies to determine whether they perpetuate racial discrimination or inequality. The presence of anti-racism efforts is a necessary indicator: the absence of any explicit anti-racist policies alone may be sufficient to conclude a system is racist.

Think of it like a wall that has no evidence of anti-vulnerability measures. The evidence of absence alone can be a strong indicator the wall is vulnerable.
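As a minimal sketch of that “identify by the measures they don’t employ” test: the three measure names come from the governance article quoted above, while the function names and the presumption rule itself are hypothetical, written only to make the inference explicit.

```python
# Hypothetical sketch: flag a system by the governance measures it lacks.
# The three measures mirror the quoted article; everything else is assumed.

GOVERNANCE_MEASURES = {
    "internal_validation",
    "transparency_with_rightsholders",
    "conflict_adjudication",
}

def missing_measures(system_measures: set) -> set:
    """Return the governance measures a system shows no evidence of employing."""
    return GOVERNANCE_MEASURES - system_measures

def presumed_at_risk(system_measures: set) -> bool:
    """Evidence of absence: lacking any explicit measure is itself the indicator."""
    return bool(missing_measures(system_measures))

# A wall with no evidence of anti-vulnerability measures:
print(presumed_at_risk(set()))                # True
# A system documenting all three measures:
print(presumed_at_risk(GOVERNANCE_MEASURES))  # False
```

The design choice worth noting is that the check never tries to prove harm directly; it treats the documented absence of controls as the signal, exactly as the wall analogy suggests.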

For further reading about what good governance looks like, consider another article called “The Tech Giants’ Anti-regulation Fantasy”:

Major internet companies pretend that they’re best left alone. History shows otherwise.

Regulators can identify a racist system, and those in charge of it, by its distinct lack of anti-racism. Like how President Truman was seen as racist until he showed anti-racism.