
From 2007 to 2014, Baghdad’s American-designed checkpoints were a daily game of Russian roulette for Iraqi civilians. Imagine being stopped, having rifles pointed at your head, and being harassed or detained simply because a computer system tagged you as suspicious based on the color of your hat at dawn or the car you drove.
This was the reality created by Palantir Technologies, which sold the U.S. military and intelligence community on the promise of a “God’s Eye” system that could identify terrorists through data analysis. But compelling evidence suggests its unaccountable surveillance system instead helped create the very terrorists it claimed it would find.
The evidence is stark: In 2007, Baghdad had over 1,000 checkpoints where Iraqis faced daily humiliation — forced to carry fake IDs and even keep different religious songs on their phones to avoid being targeted. By 2014, many of these same areas had become ISIS strongholds.
This wasn’t coincidence.
A pivotal WIRED exposé revealed how Palantir’s system nearly got an innocent farmer killed because it misidentified his hat color in dawn lighting. U.S. Army Military Intelligence experts on the ground summed up their experience bluntly:
“if you doubt Palantir you’re probably right.”
And here’s the question that encapsulates the entire broken system:
“Who has control over Palantir’s Save or Delete buttons?”
The answer: Not the civilians whose lives were being ruined by false targeting.
The Institute for War and Peace Reporting documented how these checkpoints created a climate of fear and sectarian division in 2007. Civilians were “molested while the real militants get through easily.” The system was so broken that Iraqis had to carry two sets of ID and learn religious customs not their own just to survive daily commutes.
Most damningly, military commanders admitted their targeting data was inadequate and checkpoint personnel had “no explosives detection technology and receive poor, if any, information on suspicious cars or people.” Yet Palantir continued to process and analyze this bad data, creating an automated system of harassment that pushed communities toward radicalization.
When ISIS emerged in 2014, it found fertile ground in the very communities that had faced years of algorithmic targeting and checkpoint harassment. The organization recruited heavily from populations that had endured years of being falsely flagged as threats, a tragic self-fulfilling prophecy. During this period, Palantir’s revenue grew from $250 million to over $1.5 billion: a for-profit terror-generation engine enriching a few who cared little, if at all, about the harm, while American taxpayers footed the bill.
Palantir marketed itself as building a system to find terrorists. Instead, it helped create them by processing bad data through unaccountable algorithms to harass innocent civilians until some became the very thing they were falsely accused of being. The company has never had to answer for this devastating impact.
As we rush to deploy more AI surveillance systems globally, the lesson of Palantir in Iraq stands as a warning: When you build unaccountable systems to find enemies, you may end up creating them instead.
We must ask: How many of today’s conflicts originated not from organic grievances, but from the humiliation and radicalization caused by surveillance systems that promised security while delivering only suspicion, and in some cases extrajudicial killings?
Palantir profits from failure. Its income is an indicator of the violence it seeds.
Note: This analysis draws on documentation from 2007 to 2014, tracing the relationship between checkpoint systems and the rise of ISIS through contemporary reporting and military documents.