How Palantir’s “God’s Eye” Created the Very Terrorists It Promised to Find

A Stryker vehicle assigned to 2nd Squadron, 2nd Stryker Cavalry Regiment moves through an Iraqi police checkpoint in Al Rashid, Baghdad, Iraq, April 1, 2008. (U.S. Navy photo by Petty Officer 2nd Class Greg Pierot) (Released)

From 2007 to 2014, Baghdad’s American-designed checkpoints were a daily game of “Russian roulette” for Iraqi civilians. Imagine being stopped, having rifles pointed at your head, and being harassed or detained simply because a computer system tagged you as suspicious based on the color of your hat at dawn or the car you drove.

This was the reality created by Palantir Technologies, which sold the U.S. military and intelligence community on the promise of a “God’s Eye” system that could identify terrorists through data analysis. But compelling evidence suggests its unaccountable surveillance system instead helped create the very terrorists it promised to find.

The evidence is stark: In 2007, Baghdad had over 1,000 checkpoints where Iraqis faced daily humiliation — forced to carry fake IDs and even keep different religious songs on their phones to avoid being targeted. By 2014, many of these same areas had become ISIS strongholds.

This wasn’t coincidence.

A pivotal WIRED exposé revealed how Palantir’s system nearly killed an innocent farmer because it misidentified his hat color in dawn lighting. U.S. Army Military Intelligence experts on the ground summed up their experience bluntly:

“if you doubt Palantir you’re probably right.”

And here’s the key quote that encapsulates the entire broken system:

“Who has control over Palantir’s Save or Delete buttons?”

The answer: Not the civilians whose lives were being ruined by false targeting.

In 2007, the Institute for War and Peace Reporting documented how these checkpoints created a climate of fear and sectarian division. Civilians were “molested while the real militants get through easily.” The system was so broken that Iraqis had to carry two sets of ID and learn religious customs not their own just to survive daily commutes.

Most damningly, military commanders admitted their targeting data was inadequate and checkpoint personnel had “no explosives detection technology and receive poor, if any, information on suspicious cars or people.” Yet Palantir continued to process and analyze this bad data, creating an automated system of harassment that pushed communities toward radicalization.

When ISIS emerged in 2014, it found fertile ground in the very communities that had faced years of algorithmic targeting and checkpoint harassment. The organization recruited heavily from populations that had endured years of being falsely flagged as threats: a tragic self-fulfilling prophecy. During this period, Palantir’s revenue grew from $250 million to over $1.5 billion, a for-profit terror-generation engine enriching a few who cared little, if at all, about the harms. American taxpayers were being fleeced.

Palantir marketed itself as building a system to find terrorists. Instead, it helped create them by processing bad data through unaccountable algorithms to harass innocent civilians until some became the very thing they were falsely accused of being. The company has never had to answer for this devastating impact.

As we rush to deploy more AI surveillance systems globally, the lesson of Palantir in Iraq stands as a warning: When you build unaccountable systems to find enemies, you may end up creating them instead.

We must ask: How many of today’s conflicts originated not from organic grievances, but from the humiliation and radicalization caused by surveillance systems that promised security while delivering only suspicion that escalated into extrajudicial assassinations?

Palantir profits from failure. Its income is an indicator of the violence it seeds.

Note: This analysis draws on documentation from 2007 to 2014, tracking the relationship between checkpoint systems and the rise of ISIS through contemporary reporting and military documents.

AU Tesla Autopilot Gone Wild: Out-of-Control Robot Attacks Parked Cars, Owners

Saying “Out-of-Control Tesla” seems redundant at this point.

Wild footage captured the moment an out-of-control Tesla hit vehicles in a busy shopping centre carpark, before plummeting off the side and injuring its two occupants. The driver’s dash cam showed a black Tesla T-bone an SUV, causing it to spin on the rooftop carpark at DFO Homebush, in Sydney’s west, at about 9.55am on Saturday. The vehicle kept driving and struck the car with the dash cam. A loud crash was heard from the Tesla as it went over the edge of the carpark to the level below. The Tesla is understood to have been on autopilot…

You don’t want to be anywhere near a Tesla robot, obviously.

Why does Australia even allow them in the country? If they can ban assault automatic rifles, they can ban assault automatic pilots. Tesla is a threat to public safety by design.

FL Tesla Kills One Motorcyclist

Another motorcyclist has been killed by a Tesla, which apparently turned left abruptly in front of him. Boca News Now reported it as “DEATH BY TESLA”.

According to investigators, the Tesla began to make a left turn into the path of the Kawasaki Ninja. The front of the Ninja impacted the passenger’s side front fender of the Tesla. As the crash ensued, the Tesla rotated counterclockwise and came to an uncontrolled final rest within the outside northbound lane of Seminole Pratt Whitney Road.

Driverless operation is suspected, given Tesla sensors have a long and tragic history of failing to see motorcycles before the cars violently run into them.

10 out of 10 Jars of Honey in UK Suspected Fraudulent

It’s curious that honey fraud detection is only just beginning to happen. How much fraud has there been before now, if 100% of UK samples are suspect?

In March 2023, the European Commission found that 46% of sampled products (including all 10 samples from the UK) were suspected to be fraudulent – meaning they had likely been bulked out with cheaper sugar syrups. Scientists at Cranfield University then said in August this year that they had found a way to detect fake honey products without opening the jar.

Or perhaps more to the point, why is UK honey fraud (given nearly £100 million of honey is imported per year) suspected at more than double the EU-wide rate?

The “new” method of detection mentioned is actually Spatial Offset Raman Spectroscopy (SORS) paired with machine learning. SORS is a light-analysis technique already used in pharmaceutical and security diagnostics; because it can read spectra through a container wall, it can identify sugar syrups without the jar ever being opened.
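For a sense of how the machine-learning half works, here is a minimal sketch: the SORS hardware produces a spectrum per jar, and a supervised classifier learns to separate genuine honey from syrup-bulked samples. Everything below is illustrative, not Cranfield’s actual method. The spectra are synthetic stand-ins, the band positions are invented, and logistic regression is just one simple model choice where chemometrics work often uses PLS-DA or SVMs.

```python
# Illustrative only: synthetic "spectra" standing in for real SORS
# measurements. Class 1 = honey bulked out with sugar syrup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
wavenumbers = np.linspace(200, 1800, 400)  # cm^-1 axis (made up)

def fake_spectrum(adulterated: bool) -> np.ndarray:
    # Toy model: honey shows one Gaussian band, syrup shifts intensity
    # toward a second band. Band centers here are invented.
    honey_band = np.exp(-((wavenumbers - 980) ** 2) / 5000)
    syrup_band = np.exp(-((wavenumbers - 1120) ** 2) / 4000)
    mix = rng.uniform(0.3, 0.7) if adulterated else rng.uniform(0.0, 0.05)
    noise = rng.normal(0.0, 0.02, wavenumbers.size)
    return (1 - mix) * honey_band + mix * syrup_band + noise

X = np.array([fake_spectrum(i % 2 == 1) for i in range(400)])
y = np.array([i % 2 for i in range(400)])  # 1 = adulterated

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

# Standardize each wavenumber channel, then fit a linear classifier.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The design point is that the chemistry lives in the hardware: once you have spectra, spotting syrup-bulked jars reduces to ordinary supervised classification.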