Russia Sabotages Crimea With Remotely Deployed Mines to Make it Uninhabitable

A buried lede in the news about the war in Ukraine is how Russia is remotely dumping mines to destroy Crimea for everyone.

Zaluzhny tells the Economist that the Kremlin is oblivious to the huge losses sustained by the Russian army – well over 100,000 men according to many estimates. In the last few days, Ukrainian Defense Minister Rustem Umerov says Russia has lost 4,000 men around Avdiivka alone. Open-source imagery suggests the Russians may have lost up to 200 tanks and other vehicles in that battle.

[…]

Even where dense minefields are penetrated, often at great cost, the Russians restore them through remote mine-laying.

The real killer analysis, when you put both of these points in context, is that Russia has completely devalued human life.

They spread mines, throw away assets, and wallow in deep trenches, the antiquated polar opposite of Ukraine, which clearly uses cutting-edge light-strike methods as it fights for human rights.

Remote mine-laying is old-hat Cold War stuff for Russia. However, in typical evil snark (a nod to Molotov’s “bread baskets“), they have named their newest system “Земледелие” (agriculture) because in minutes an entire field 15 kilometers away in Crimea is “planted” with humanity-destroying munitions.

OpenAI Interim-CEO Maybe Responsible for Over 40 Deaths

Update a day later: the new CEO was just abruptly replaced by another one. Three CEOs in three days, for a company that claims to have the best prediction algorithm in history.

Update another day later: the CEO who replaced the CEO was replaced by the first CEO, making American business culture look like it has exactly no clue about national security.
In the short career path of the newly minted CEO of OpenAI lies a curious trail of software dumpster fires and fatalities.

Start with the fact that she graduated in 2012 with a degree in mechanical engineering from the Thayer School of Engineering at Dartmouth College in Hanover, New Hampshire. That’s barely 10 years ago.

When she was asked last year about her favorite film (a famous story about the struggle between humanity and its runaway monsters, namely how to address societal threats such as the killer HAL computer in the science thriller 2001), she offered some alarmingly empty thoughts.

A Space Odyssey continues to stir my imagination with its imagery and music, especially in the breathtaking sequence where the space shuttle docks accompanied by the waltz of Johann Strauss’s Blue Danube Waltz, inciting contemplation of the weightlessness of this event and the magnificence of the moment.

Ok. The waltz of the waltz. The weightlessness of being weightless.

Those are truly the most vacuous possible remarks about the movie 2001 (pun not intended).

A villainous machine is out to kill all humans and the entire movie recap (by a supposed AI expert!) is that her breath was taken away by the “weightlessness” of the waltz in a Strauss waltz?

That’s it?

Did she really even watch this movie? (TL;DR as I presented in 2011 is that technology tends to disrupt faith in humanity, enabling dangerous “superhuman” mythology.)

Does she describe The Silence of the Lambs as a nice soundtrack for the magnificence of rubbing on skin lotion?

Right away it sounded illogical to me that anyone could appoint this person as CEO of anything related to AI. So I searched for more context and experience.

Fresh from her mechanical engineering degree, she took a software job at Tesla in 2013, where her resume claims credit as product manager for perhaps the most infamously murderous AI in history.

I’d point to her LinkedIn profile but it just disappeared from public view.

You have to remember that AI in 2013, her foundational move, was completely untrustworthy. The level of overconfidence and the credit she gives herself is a huge red flag on her resume. It’s like saying she was the project manager for the overpromised, underdelivered flying machines of 1883, the ones that always crashed and burned. Not a good thing.

And then she seems to try and float a narrative that, after joining OpenAI in 2018 in project management, she alone built ChatGPT?

No CTO of any quality would ever claim to have alone built the thing that hundreds of people worked on. That’s the most toxic CTO position possible.

I mean, is she so responsible for these products she helped with that we can now assign her blame for the widely documented failures, including fatalities? For the legions of bugs, let alone all the deaths, is she accepting personal fault? Her product’s “terms of use” indicate… she doesn’t do accountability.

Source: ChatGPT terms of use

The Verge has reported the tragedy of her engineering failures plainly as

…hundreds of crashes involving Tesla vehicles using FSD and Autopilot and dozens of deaths…

Is she only taking credit for successes and no failures, to elevate herself into talk shows and pay raises, hoping to pin someone else with the cleanup and cost?

The person who thinks that 2001 is a pleasing musical about light (weightless) topics, and who unleashed a mass killer robot (Tesla), is somehow suddenly CEO of OpenAI with only 10 years of experience?

Doesn’t add up. Weightless might be the right term.

Let’s take for a minute the argument from the OpenAI board (and its new CEO) that the old CEO Sam Altman was moving too fast and opaquely… so they quickly fired him with no warning.

Got that hypocrisy?

The first and foremost obligation of the board of directors, if you ask shareholders, is to the shareholders. This should not be news. In this case, shareholders really means just one: Microsoft (e.g. the $13 billion they gave OpenAI to make Bing better than Gates’ corrupt 1990s “Clippy” bot disaster). And yet, Microsoft was not informed, not at all.

Talk about untrustworthy leadership.

Staff of OpenAI?

Also not informed, not at all, setting up an internal political bloodbath of fear and loyalty. Expect Microsoft in “full evil ahead” mode to swoop in and buy every coin-operated OpenAI loyalist to Altman, ruthlessly gutting the company.

Those huge errors, all the way up and down the spectrum of dissent, are some easily avoidable massive failures of board-level diligence right out of the gate.

Perhaps it’s an amateur executive hour at OpenAI because… well, look again: a very short resume with some notable catastrophic failures including real world robot deaths, let alone obviously empty comments about the gravity of fictional robot deaths.

Next I expect someone to ask me whether her only ethics training was just her internship on Wall Street (2011 Goldman Sachs), which brings to mind another movie. I wonder if, while so rapidly climbing corporate ladders, the OpenAI team hums the soundtrack from…

Why Are Bats So Healthy? They Fly

Apparently if bats didn’t have wings they’d be as sick as dogs.

The key to bats’ health seems to be flight, or at least the effects that evolving flight has had on the bat body.

For all the billionaires trying to cheat death, who wish to stop aging as if making some childish fantasy novel into reality, the answer seems obvious.

Grow a pair of wings over a million years.

But seriously, immortality is best achieved through acts of charity.

OpenAI Fires CEO Sam Altman for Lies

A few months ago I wrote that Sam Altman seemed to be lying on purpose, just like the product he became infamous for spreading.

My argument back then was nobody should trust OpenAI (or Microsoft) when they promise data is safe in ChatGPT.

Nope. NOT safe.

…Microsoft reportedly suspended use of ChatGPT internally a few days ago.

Or more to the point, ChatGPT seemed to be a lying machine in a constant state of integrity breaches, not unlike Altman’s other dubious ventures.

A CEO of a software company allowing a constant state of data integrity breaches is no accident; that’s a management decision. It’s like a financial company that can’t balance its books, or a payment card processor with constant privacy breaches.

Apparently my assessment of trust was closer to the truth than even I realized because OpenAI just fired their CEO, citing an inability to believe him.

Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

Will they fire ChatGPT next?

But seriously, an interesting footnote is that the board also says it forced its chair to step down but didn’t fire him. That message hints at a conspiracy, without sufficient evidence to hold the chair as accountable as Altman.