Palantir AI Shown to “unpredictably escalate the risk of conflict”

I expect the financial headlines spun from the latest AI research will read “buy, buy, buy,” since cynical hawks on Wall Street tend to calculate nuclear war escalation as simply “good for business”.

To any rational person who doesn’t want to murder their neighbors for profit, however, the real news is that Palantir is yet again being exposed as toxic to humanity: the worst possible bet.

All these AIs are supported by Palantir… [with] demonstrated tendencies to invest in military strength and to unpredictably escalate the risk of conflict – even in the simulation’s neutral scenario.

See any ethical issues in a profit motive tied to the unpredictable escalation of conflict?

Palantir has been known for many years to get away with extrajudicial, mistargeted killings and willful privacy violations used to undermine political opponents. It has been repeatedly exposed as an engineering culture driven by political extremism; a textbook example of the kind of company that should have been banned since the 1960s, when warnings about AI in warfare really started. Using Palantir today reminds me of the thinking that allowed East Germans to boast their oily, smelly, underpowered Trabant was in “high demand”… right up to the day the wall came down. Nobody in their right mind would ever buy into Palantir.

Invest in stock from a warmongering unpredictability machine known for being wrong most of the time? It’s like Wall Street rubbing its hands over lofty revenue targets for a bridge-building company that pours its money into the water because it never reaches the other side.
