
ChatGPT Still Fails at Even Basic Ciphers (Broken Caesar)

I’m noticing again that ChatGPT is so utterly broken that it can’t even correctly count and track the number of letters in a word, and it can’t tell the difference between random letters and a word found in a dictionary.

Here’s a story about the kind of atrociously low “quality” chat it provides, in all its glory. (Insert here an image of a toddler throwing up after eating a whole can of alphabet soup.)

Ready?

I prompted ChatGPT with a small battery of cipher tests for fun, thinking I’d go through them all again to look for any signs of integrity improvement in the past year. Instead it immediately choked and puked up nonsense on the first and most basic task, in such a tragic way that the test really couldn’t even get started.

It would be like asking a student in English class, after a year of extensive reading, to give you the first word that comes to mind and they say “BLMAGAAS”.

F. Not even trying.

In other words (pun not intended), when ChatGPT was tested with a well-known “Caesar” substitution that shifts the alphabet three positions to encode FRIENDS (7 letters), it suggested ILQGHVLW (8 letters).

I had to hit the emergency stop button. I mean think about this level of security failure where a straight substitution of 7 letters becomes 8 letters.

If you replace each letter F-R-I-E-N-D-S with a different one, that means 7 letters returns as 7 letters. It’s as simple as that. Is there any possible way to end up with 8 instead? No. Who could have released this thing to the public when it tries to pass 8 letters off as being the same as 7 letters?

I immediately prompted ChatGPT to try again, thinking there would be improvement. It couldn’t be this bad, could it?

It confidently replied that ILQGHVLW (8 letters) deciphers to the word FRIENDSHIP (10 letters). Again the number of letters is clearly wrong, as you can see in my reply.

And also noteworthy is that it was claiming to have encoded FRIENDS, and then decoded it as the word FRIENDSHIP. Clearly 7 letters is neither 8 nor 10 letters.

Excuse me?

The correct substitution of FRIENDS is IULHQGV, which you would expect this “intelligence” machine to do without fail.
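To make the arithmetic concrete, here is a minimal Python sketch of the shift (my own illustration, not anything from the ChatGPT session): each letter moves three positions to the right, wrapping at Z, so the output length can never differ from the input length.

    def caesar(text, shift):
        # Shift each A-Z letter by `shift` positions, wrapping around the alphabet.
        return "".join(
            chr((ord(c) - ord("A") + shift) % 26 + ord("A"))
            for c in text.upper()
        )

    print(caesar("FRIENDS", 3))  # IULHQGV -- 7 letters in, 7 letters out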

It’s trivial to decode ChatGPT’s suggestion of ILQGHVLW (using a 3-letter shift of the alphabet) as a non-word. FRIENDS should never encode and then decode into an unusable mix of letters: “FINDESIT”.

How in the world did the word FRIENDS generate the combination of letters FINDESIT, which then got shifted further into the word FRIENDSHIP?
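Running ChatGPT’s suggestion backward through the same shift (again my own sketch, assuming the standard 3-position Caesar key) shows exactly the non-word described above:

    def caesar(text, shift):
        # Same helper as above; a negative shift decodes.
        return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text.upper())

    print(caesar("ILQGHVLW", -3))  # FINDESIT -- not a dictionary word, and still 8 letters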

Here’s another attempt. Note below that F-R-I-E-N-D-S shifted three letters to the right becomes I-U-L-H-Q-G-V, which unfortunately is NOT the answer ChatGPT responded with.

Why did ChatGPT generate K-A-P as the last three letters of the cipher?

WRONG, WRONG, WRONG.

Look at the shift. The (shifted) letters K-A-P very obviously get decoded to the (original) letters H-X-M, which would leave us with a decoded F-R-I-E-H-X-M.

FRIEHXM. Wat. ChatGPT “knows” the input was FRIENDS, and it “knows” deciphering has failed if the output is different.

Upon closer inspection, I noticed how these last three letters were oddly inverted. The encoding process opaquely flipped itself backward. That’s how it encoded a non-word F-R-I-E…K-A-P.

In simpler terms, ChatGPT flipped itself into reverse gear halfway through, incorrectly using N->K (shift left 3 letters) instead of the correct encoding N->Q (shift right 3 letters).

Thus, even in a case where it starts with the correct shift key of F->I, we see a very obvious and easy-to-explain mistake of N->K (an abrupt inversion of the key, shift left 3 letters).

Given there’s no H-X-M in FRIENDS… hopefully you grasp the issue with claiming K-A-P in a cipher where the first letter F was encoded as I, and understand how blatantly incorrect this simple substitution is.
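For what it’s worth, the “reverse gear” theory is easy to reproduce. The sketch below is speculative (a guess at the failure mode, not a claim about ChatGPT’s internals): encode the first four letters with the correct right shift, then flip to a left shift for the last three.

    def caesar(text, shift):
        return "".join(chr((ord(c) - ord("A") + shift) % 26 + ord("A")) for c in text.upper())

    # Hypothetical reconstruction of the mistake: right shift for F-R-I-E,
    # then an inverted (left) shift for N-D-S.
    broken = caesar("FRIE", 3) + caesar("NDS", -3)
    print(broken)               # IULHKAP -- ends in the K-A-P seen in the reply
    print(caesar(broken, -3))   # FRIEHXM -- decoding yields the non-word above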

This may seem long-winded, yet it represents a highly problematic and faulty logic inversion at the simplest stage of testing. Imagine trying to explain an integrity failure on a far more complex subject with multi-layered and historical encoding, like health or civil rights.

There are very serious integrity breach implications here.

Can anyone imagine a calculator company boasting a rocket-like valuation, billions of users, and billions of dollars invested by Microsoft, and then presenting…

Talk about zero trust (pun not intended), as explained in “An Independent Evaluation of ChatGPT on Mathematical Word Problems”.

We found that ChatGPT’s performance changes dramatically based on the requirement to show its work, failing 20% of the time when it provides work compared with 84% when it does not. Further, several factors about MWPs relating to the number of unknowns and number of operations lead to a higher probability of failure when compared with the prior, specifically noting (across all experiments) that the probability of failure increases linearly with the number of addition and subtraction operations.

We are facing a significant security failure, and it cannot be emphasized enough how truly dangerous it is to release this to the public without serious caution.

When ChatGPT provides inaccurate or nonsensical answers, such as stating “42” as the answer to the meaning of life, or asserting that “2+2=gobble,” some people are too quick to accept such instances as evidence that only certain/isolated functions are unreliable, as if there must be some vague greater good (like hearing the awful fallacy that at least fascists made the trains run on time).

Similarly, when ChatGPT fails in a serious manner, such as generating racist or otherwise socially harmful content, the failure is often too easily waved away or even made worse.

In order to make ChatGPT less violent, sexist, and racist, OpenAI hired Kenyan laborers, paying them less than $2 an hour. The laborers spoke anonymously… describing it as “torture”…

At a certain point, we need to question why the standard for measuring harm is being lowered so aggressively that a product can remain persistently toxic for profit without any real sense of accountability.

Back in 1952, tobacco companies spread Ronald Reagan’s cheerful image to encourage cigarette smoking, preying on people’s weaknesses. What’s more, they employed a deceptive approach, distorting the truth to undercut the unmistakable and emphatic scientific health alerts about cancer at the time. Their deliberate strategy involved manipulating the criteria for assessing harm. They were well aware of their tactics.

Ronald Reagan played a significant role in exploitation campaigns, which are claimed to have caused the deaths of at least 16 million Americans. It wasn’t until data integrity controls were strengthened that the vulnerability was addressed.

This is the level of massive integrity breach that may be necessary to contextualize the “attraction” to OpenAI. A “three sheets to the wind” management of public risk also reminds me of CardSystems-level negligence in attending to basic security.

Tens of Millions of Consumer Credit and Debit Card Numbers Compromised

The CardSystems incident was pivotal, underscoring undeniable harms. Sixteen million Americans died from tobacco over decades; then tens of millions of American payment cards were compromised in systems breaches over years.

Although these were distinct issues, they shared a common thread: a need for regulatory intervention, and an acceleration of harm from inaction, which is very much what OpenAI should be judged against. Look at the heavily studied Chesterfield ad above one more time, and then take a long look at this:

The last time big companies blew this much smoke, sixteen million Americans died.

Honestly, I expected ChatGPT to complain that the Chesterfield ad with Ronald Reagan ran the same year as, and in direct response to, the scientific study, not two years after it. Here’s how Bing’s AI chat handled the same question, for comparison.

Did you expect Microsoft AI to promote smoking? You probably should now.

Microsoft seems to be actively promoting smoking to users as cute commentary, arguably far worse than OpenAI forgetting whether Reagan promoted it. Also, the Christmas ad campaign in question was not from 1948, it was from 1952; Bing failed to produce the correct year. Alas, these AI systems pump obvious integrity failures into the public, one after another.

The tobacco industry’s program to engineer the science relating to the harms caused by cigarettes marked a watershed in the history of the industry. It moved aggressively into a new domain, the production of scientific knowledge, not for purposes of research and development but, rather, to undo what was now known: that cigarette smoking caused lethal disease. If science had historically been dedicated to the making of new facts, the industry campaign now sought to develop specific strategies to “unmake” a scientific fact.

The very large generative AI vendors fit only too neatly into what the above quote describes as a production process to “unmake” a scientific fact… and for financial gain.

In 1775, in his book, Chirurgical Observations, London physician Percival Pott noted an unusually high incidence of scrotal cancer among chimney sweeps. He suggested a possible cause… an environmental cause of cancer was involved. Two centuries later, benzo(a)pyrene, a powerful carcinogen in coal tar, was identified as the culprit.

Carcinogens in tar were studied and known to be harmful since the late 1700s? The timing of scientific fact-gathering for “intelligence” sounds very similar to the worldwide abolition of selling humans (another ChatGPT test it failed), except that somehow selling tobacco continued another 100 years longer than slavery, while killing tens of millions of people.

Let’s go back to considering the magnitude of negligence in privacy breaches of trust like CardSystems, let alone the creepily widespread and subtle ones like the privacy risk of Google calculator.

Map of Google calculator network traffic flows

Everyone now needs to brace themselves for low-integrity products such as the AI calculator that can’t do math (a failure to deliver information reliably with quality control), perhaps racing us toward the highest levels of technology mistrust in history. Unless there’s an intervention compelling AI vendors to adhere to basic ethics, establishing baseline integrity control requirements (much as proving cholera spread through unsafe water led to baseline water safety requirements), safety failures are poised to escalate significantly.

The landscape of security controls to prevent privacy loss underwent a significant transformation in response to the enactment of California’s SB1386, a necessary change driven by the breach law and its implications. After 2003 the term “breach” took on a more concrete and enforceable significance in relation to potential dangers and risks. Companies finally found themselves compelled to take fast action to prevent their own market from collapsing due to a predictable lack of trust.

But twenty years ago the breach regulators were focused entirely on confidentiality (privacy)… and now we are deep into the era of widespread and PERSISTENT INTEGRITY BREACHES on a massive scale, an environment seemingly devoid of necessary integrity regulations to maintain trust.

The dangers we’re seeing right here and now in 2023 serve as a stark reminder of the kind of tragically inadequate treatment of privacy in the days before related breach laws were established and enforced.

The good news is there are simple technical solutions to these AI integrity breach risks, almost exactly like there were simple technical solutions to cloud privacy breach risks. Don’t let anyone tell you otherwise. As a journeyman with three decades of professional security work to stop harms (including extensive public writing and speaking), I can explain and prove both solutions are immediately viable. It’s like asking me “what’s encryption?” in 2003.

The bad news is the necessary innovations and implementations of these open and easy solutions will not happen soon enough without regulation and strong enforcement.

Nord Stream Pipeline Explosion Explained: Economic Power Defeated by Brains

The latest investigative reporting in The Atlantic reveals how sabotage of huge gas pipes under the sea was likely the work of just a few very smart people in a sailboat.

My favorite part of the story is the social engineering tricks used to track down the exact boat.

…she played stupid. She knew that the boating communities of north Germany were still almost exclusively male, and decided that pretending ignorance would suit their expectations.

A typical conversation went like this: “I want to rent a boat this year, and my friends, they rented a boat called Andromeda last year,” she would begin, explaining that her friends had been “so happy with it.” Then she said she didn’t know any details about the boat, even whether it was a motorboat or a sailboat.

“Well, a sailing boat usually has a mast on it,” one of the charter officials told her.

She quickly found what she was looking for.

It’s a fantastic article with extremely good analysis. However, I will say the author entirely misses a crucial precedent from 2008.

…four CIA spies died when they sailed into a tropical storm on daring mission to plant listening pod disguised as a rock on seabed…

Sailing into Tropical Storm Higos was not smart, which is why we know so much about it.

The Atlantic article gives a lot of focused attention to diving down to the Nord Stream Pipeline, much more than to the use of long lines and remote controls. It’s entirely possible to inexpensively avoid diving while placing explosives 300ft under the surface. The author even describes the construction of the pipeline on the surface in terms of a simple engineering design that could be used to destroy it on the seabed, but never puts the two together.

I’m also reminded of a post I wrote a while ago about the Vietnam War, with modern armies thinking about future conflict in terms of needing brains more than brawn.

When you really get into reading Mrazek, you have to wonder why he didn’t call his 1968 thesis the war of art:

The impotence of the American juggernaut in Vietnam has put this problem under the spotlight of history. The one thing the guerrillas have in abundance is imagination, and this seems to outweigh the imbalance in materiel. It is the author’s contention that creativity is what wins battles–the same faculty that inspires great art.

All this means really that Russia is in deep trouble.

Its dictator has spent decades destroying any ability to think creatively (e.g. undermining threats to dictatorship) in order to drive a sad state of fealty (e.g. coin-operated politicians he controls with assassinations).

On that note, the least creative political party in the world (pro-Putin GOP) appears to be trying to use its economic power to help this dictator and his thoughtless hordes lose their wars more slowly and at an even higher cost.

Russian Navy Black Sea Fleet in Giant Retreat: Fleeing Ukrainian Attacks

This is a story about false fear and about Russian propaganda, if you read The Atlantic. Elon Musk infamously tried to boast he was personally going to prevent nuclear war by blocking Ukrainian forces, preventing them from defending civilian areas against Russian attacks.

Musk… called Walter Isaacson, his biographer, and told him there was a “non-trivial possibility” that the sea-drone attack could lead to a nuclear war. According to Isaacson, Musk had recently spoken with Russia’s ambassador in Washington, who had warned him explicitly that any attack on Crimea would lead to nuclear conflict. Musk implied to several other people (though he later denied it) that he had been speaking with President Vladimir Putin around that time as well.

He talked with Russia, they scared him with nonsense, and Musk probably thought he was sounding tough by doing what they said. To the trained ear, however, Musk sounds like a scared toddler hiding under his blankets, because that’s what comes through in his Moscow-themed depiction of tiny little drone attacks on big mighty ships forcing nuclear war.

Fast forward and the Ukrainian defenses carried on anyway, their drones quietly going around Musk’s chicken-little attempts at blocking them.

Now we see the result clearly, the exact opposite of Musk’s predictions, as documented in the latest news from the Institute for the Study of War:

The Russian military recently transferred several Black Sea Fleet (BSF) vessels from the port in occupied Sevastopol, Crimea to the port in Novorossiysk, Krasnodar Krai, likely in an effort to protect them from continued Ukrainian strikes on Russian assets in occupied Crimea. Satellite imagery published on October 1 and 3 shows that Russian forces transferred at least 10 vessels from Sevastopol to Novorossiysk.[1] The satellite imagery reportedly shows that Russian forces recently moved the Admiral Makarov and Admiral Essen frigates, three diesel submarines, five landing ships, and several small missile ships.[2] Satellite imagery taken on October 2 shows four Russian landing ships and one Kilo-class submarine remaining in Sevastopol.[3] Satellite imagery from October 2 shows a Project 22160 patrol ship reportedly for the first time in the port of Feodosia in eastern Crimea, suggesting that Russian forces may be moving BSF elements away from Sevastopol to bases further in the Russian rear.[4] A Russian think tank, the Center for Analysis of Strategies and Technologies, claimed on October 3 that the BSF vessels’ movements from occupied Sevastopol to Novorossiysk were routine, however.[5] Russian forces may be temporarily moving some vessels to Novorossiysk following multiple strikes on BSF assets in and near Sevastopol but will likely continue to use Sevastopol’s port, which remains the BSF’s base. Former Norwegian Navy officer and independent OSINT analyst Thord Are Iversen observed on October 4 that Russian vessel deployments have usually intensified following Ukrainian strikes but ultimately returned to normal patterns.[6] ISW will explore the implications of Ukrainian strikes on the BSF in a forthcoming special edition.

Not only were Ukrainian defensive methods effective at countering Russian ships, they have proven to be a strong deterrent. Drone attacks are forcing a massive withdrawal of a completely outclassed and vulnerable Russian Navy.

Musk is wrong, wrong, and wrong. The fact that he won the lottery once and has used lies and charm to balloon that one windfall into an expanding giant empire of false promises somehow didn’t set him up to understand basic military strategy.

Go figure.

Ukrainian Drones Struck Russian Warships, Again Proving Elon Musk a Liar

Video grab from drone, moments before it hits a Russian ship. Source: Ukraine security services

Allegedly Elon Musk continuously lies about who told him to protect Russian warships and why he did it. Gaslighting, as usual, the unstable Musk has said both that he personally stepped in to help Putin save the Russian ships, and that he refused to step in and did nothing in order to help Putin save the Russian ships.

Either way, despite flip-flopping like a slimy “big fish” story out of water, it sure sounds like Logan Act time. If he really wants to be remembered in history as someone truly unique, that looks like his best fit yet. And despite his self-dealing protests and complaints meant to silence independent expert reports, journalists are laying bare some actual truths about drone strikes in Crimea.

Here is the part you might not have heard, or not registered: The same team launched a similar attack again a few weeks later. On October 29, a fleet of guided sea drones packed with explosives did reach Sebastopol harbor, using a different communications system. They did hit their targets. They put one Russian frigate, the Admiral Makarov, out of commission. The team believes that they damaged at least one submarine and at least two other boats as well.

And then? Nuclear war did not follow. Despite Musk’s fears, in other words—fears put into his head by the Russian ambassador, or perhaps by Putin himself—World War III did not erupt as a result of this successful attack on a Crimean port. Instead, the Russian naval commanders were spooked by the attack, so much so that they stuck close to Sebastopol harbor over the following weeks.

What would happen if Ukrainian drones struck Russian warships?

They did strike.

We don’t have to wonder.

BBC Verify’s research suggests Ukraine has carried out at least 13 attacks with sea drones – targeting military ships, Russia’s naval base in Sevastopol, and Novorossiysk harbour. This is based on announcements by Russian and Ukrainian authorities, and local media reports.

And we certainly don’t have to listen to Elon Musk, a proven serial liar. We know he personally interfered with U.S. foreign policy by directly negotiating with Russian officials to undermine Ukrainian defenses. He has consistently been on the wrong side of history, because he never changes.

Signs that a private citizen was politically manipulating service availability to favor a certain foreign policy had already been there for all to see a year ago.

Ukrainian operators deserve credit for the mission, and also bear responsibility for entrusting Musk. Militant due diligence (anathema to the ignorance and impulsivity of coin-operated Musk) alerted Starlink’s wannabe-dictator to the existence of operations threatening Putin’s naval assets in Sebastopol harbor.

It was a simple and classic mistake. Ukrainians trusted a man who can never be trusted. They expected a rational response from a man who thrives on contrarian and cruel lies. It was like filing a support ticket with a Belarusian telephone company hoping old “war crimes” Lukashenko would do anything other than suck up to Russia.

Regrettably, just like many individuals who purchased a Tesla, someone fell victim to Elon Musk’s cunning manipulation (promising assistance but ultimately not delivering). This tactic is commonly referred to as advance-fee fraud. Musk’s actions included sporadic service provision combined with demands for full payment, all while pretending to “care” about Ukraine. Unfortunately, this was a fraudulent scheme that some mistook for genuine support of Ukrainian defense efforts.

Here is how and why Ukrainian intelligence was duped.

Russian-occupied areas of Ukraine were deliberately given a taste of intermittent network access by the duplicitous and compromised Starlink operation:

Starlink UP

  • RU occupied Kherson
  • RU occupied Zaporizhzhia
  • RU occupied Mykolaiv
  • RU occupied Kharkiv

Starlink FAIL

  • RU occupied Donetsk
  • RU occupied Luhansk
  • RU occupied Crimea

Political service map.

Shortly after details of Ukraine’s secret October 2022 defense operation were shared with the American company (in order to enhance network reliability under Starlink’s public commitments to assist in the defense against Russia), Elon Musk seems to have used this exact information to pursue a completely contrary agenda. He allegedly worked actively to undermine U.S. objectives, turned his assistance programs into a means to block operations, and promoted pro-Putin propaganda campaigns.

  • October 3, 2022: proposed a “George Blake peace” plan that involved Ukraine surrendering and ceding its territory to Russia
  • October 21, 2022: fraudulently asserted Donetsk, Luhansk and Crimea should be taken away from Ukraine

Did I mention the Logan Act?

If this seems too simplistic, as if Musk doesn’t have a clear enough motive for personally interfering to block U.S. executive branch policy (i.e. to appear so treasonous), blame China.

Musk caring about Russia invading Ukraine, which seems to have nothing to do with his stated “business” or even personal interests, actually parallels top concerns of his Chinese handlers (e.g. defense of Taiwan).

Taiwan is “not for sale”, the island’s foreign minister said in a stern rebuke to Elon Musk who asserted Taiwan was an integral part of China, as the billionaire again waded into the thorny issue of relations between Beijing and Taipei… Last October, he suggested that tensions between China and Taiwan could be resolved by handing over some control of Taiwan to Beijing, drawing a similarly strong reprimand from Taiwan.

Last October. See?

Musk does whatever China says. Ukraine is a proxy.

It’s no secret Russia is China’s strategic ally in these conflicts and also a present testing ground for undermining U.S. foreign policy (let alone actual defense operations) by compromising selfish and greedy American tech executives.

Musk’s behavior, clearly favoring enemies of the country he claims to call home, really is not far removed from stories about a serial liar in a U.S. company jailed on charges of treason and espionage.

In 2016, when Elon Musk was questioned about Tesla’s “Autopilot” potentially causing fatalities, his response was marked by anger and defensiveness. He argued that since millions of people die in car accidents, he shouldn’t be expected to care about individual deaths related to his product. Subsequently, he initiated a deceptive public relations effort claiming that his cars were the safest and would save more lives than any others, even though Tesla vehicles continued to be involved in fatal accidents, surpassing the combined total of all other electric vehicles.

Then, in 2022, when informed about the use of drones to prevent ships from bombing civilian areas, Musk intervened with network services to ensure these ships could continue their actions, which resulted in harm to hundreds or thousands of innocent people including children. He subsequently launched a fraudulent propaganda campaign, taking credit for preventing a fake potential world war by stopping drones, despite the simple fact that the drones still managed to target the ships.

There’s a consistency to his interference with U.S. policy, his “grossly inflated sense of self worth”, his hatred of humanity, and his greedy pattern of self-serving propaganda.

Who better for China to compromise?

Source: Twitter