Google Research published a blog post in July 2021 from its Ghana office titled “Mapping Africa’s Buildings with Satellite Imagery.” I remember it well, as I referenced it in presentations on the history of technology used by Green Berets.
The U.S. Army communication satellite COURIER I B was launched on Oct. 4, 1960. It went into orbit and began to receive, store, and transmit to Earth a stream of voice and telegraph radio messages at a rate of slightly more than 67,000 words a minute.
The project used deep learning to detect 516 million buildings from high-resolution satellite imagery across the African continent. It was conspicuously filed under “AI for Social Good.” The methodology was to train a U-Net model on 50-centimeter-per-pixel satellite imagery to classify each pixel as building or non-building, then group pixels into individual building footprints with confidence scores and geographic identifiers.
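The grouping stage of that methodology, turning a per-pixel probability map into discrete footprints with confidence scores, can be sketched in a few lines. This is a minimal illustration under stated assumptions, not Google's actual implementation: the function, data layout, and threshold are invented for the example, and real pipelines operate on georeferenced rasters rather than Python lists.

```python
# Hypothetical post-processing sketch: group above-threshold pixels
# from a segmentation model's output into connected components, and
# score each component by its mean building probability.
from collections import deque

def extract_footprints(prob_map, threshold=0.5):
    """prob_map: 2D list of per-pixel building probabilities in [0, 1].
    Returns one dict per detected footprint: its pixel coordinates
    and a mean-probability confidence score."""
    rows, cols = len(prob_map), len(prob_map[0])
    seen = [[False] * cols for _ in range(rows)]
    footprints = []
    for r in range(rows):
        for c in range(cols):
            if prob_map[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component (4-connectivity).
                pixels, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and not seen[ny][nx]
                                and prob_map[ny][nx] >= threshold):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                confidence = sum(prob_map[y][x] for y, x in pixels) / len(pixels)
                footprints.append({"pixels": pixels,
                                   "confidence": round(confidence, 3)})
    return footprints

# Tiny 4x4 probability map containing two separate "buildings".
demo = [
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.9, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.6],
    [0.0, 0.0, 0.0, 0.7],
]
print(len(extract_footprints(demo)))  # prints 2
```

The point of the sketch is how little separates the halves of the pipeline: everything downstream of the model is generic geometry, indifferent to whether the polygons feed a census or a target bank.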
Fast forward to February 28, 2026: Israeli jets unloaded 30 bombs on Ayatollah Ali Khamenei’s compound in Tehran during daylight hours, killing him along with his family members and roughly 40 senior Iranian officials. Within hours, Airbus satellite imagery confirmed multiple collapsed buildings. Planet Labs followed with 50-centimeter resolution imagery from its SkySat constellation, which you’ll note is the same resolution class as Google’s Open Buildings training data, for “battle damage assessment.”
The CIA had been tracking Khamenei’s movements. The compound had been identified long ago. The buildings had been mapped long ago. The meeting was anticipated and the attack was adjusted by the hour. The strike was timed. All of this is routine news coverage, yet how it connects to satellite imagery analysis is missing from most reporting.
The Pipeline
Google’s Open Buildings dataset has grown remarkably since 2021. It now contains 1.8 billion building detections across Africa, South Asia, Southeast Asia, Latin America and the Caribbean, covering 58 million square kilometers. In October 2024, Google released the Open Buildings 2.5D Temporal Dataset with annual snapshots of building presence, counts, and heights from 2016 to 2023, derived from freely available Sentinel-2 imagery. The team figured out how to extract building footprints from imagery that was previously considered too low-resolution for the task, using a teacher-student model architecture that super-resolves low-res images while simultaneously detecting structures.
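The temporal layer described above, annual snapshots of building presence and counts, makes change detection almost trivial once the detections exist. The sketch below is illustrative only: the tile, counts, and data layout are invented for the example, and the real 2.5D Temporal Dataset ships as rasters, not Python dicts.

```python
# Illustrative change detection over hypothetical annual building
# counts for one tile, in the spirit of the 2.5D Temporal Dataset.
annual_counts = {
    2016: 1180, 2017: 1224, 2018: 1301, 2019: 1390,
    2020: 1402, 2021: 1515, 2022: 1633, 2023: 1781,
}

def flag_growth(counts, min_rel_change=0.05):
    """Return (year, relative change) for year-over-year jumps above
    min_rel_change -- the same signal whether it serves urbanization
    tracking or construction monitoring at a military compound."""
    years = sorted(counts)
    flagged = []
    for prev, cur in zip(years, years[1:]):
        rel = (counts[cur] - counts[prev]) / counts[prev]
        if rel >= min_rel_change:
            flagged.append((cur, round(rel, 3)))
    return flagged

print(flag_growth(annual_counts))
```

Nothing in the computation encodes intent; the threshold and the map tile it is pointed at are the only things that change between the humanitarian use case and the other one.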
To be clear, regardless of Google marketing, this is not humanitarian infrastructure.
This is a targeting capability that happens to have humanitarian applications.
The distinction matters because the pipeline runs in both directions. The same model that counts buildings in Lagos for healthcare management can count buildings in Tehran for strike planning. The same temporal change detection that tracks urbanization in Kampala can track construction at military compounds in Isfahan. The same confidence-scored building footprints that help electrification planners in Uganda can populate a target bank anywhere on Earth where satellite imagery exists.
The Contract
Google’s Open Buildings team operates from Ghana and… Tel Aviv. Google holds a $1.2 billion cloud computing contract with the Israeli government and military called Project Nimbus, jointly with Amazon. Through Nimbus, Google provides the full suite of machine learning and AI tools available through Google Cloud Platform — facial detection, automated image categorization, object tracking, sentiment analysis.
The Intercept collected internal documents that reveal that before Google signed the contract, the company’s own lawyers acknowledged that “Google Cloud Services could be used for, or linked to, the facilitation of human rights violations, including Israeli activity in the West Bank.” The company also knew it would be unable to monitor or prevent Israel from using its tools to harm Palestinians, and that the contract could obligate Google to stonewall criminal investigations by other nations into Israel’s use of its technology.
Google signed a contract that prohibits it from halting services under boycott pressure and that cannot be terminated based on how the technology is used.
Israel’s AI-assisted targeting systems are well documented.
- “The Gospel” categorizes buildings as military bases.
- “Lavender” classifies individuals as targets.
- “Where’s Daddy” tracks when those targets are home with their families, a methodology some might recognize from President Andrew Jackson’s 1830s Trail of Tears (genocide).
The bottom line is that these building detection and classification systems are architecturally identical to what Google demonstrates in its open research, running on the kind of cloud infrastructure Google provides through Nimbus.
Google’s official position: the Nimbus contract “is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.” Israel’s National Cyber Directorate, serving a completely different audience, said in mid-2024:
Thanks to the Nimbus public cloud, phenomenal things are happening during the fighting… these things play a significant part in the victory.
The Good Tree
At 10:45 a.m. local time on February 28, as Khamenei was targeted and killed, a missile destroyed the Shajareh Tayyebeh (“The Good Tree”) girls’ elementary school in Minab, southern Iran. The roof collapsed on approximately 170 students, most of them girls between seven and twelve years old. The death toll has reached 165.
The school had decided to close after strikes began that morning, but families hadn’t had time to pick up their children.
The Israeli military, with pinpoint precision and constant monitoring, said it was not aware of strikes in the area.
The U.S. military, with pinpoint precision and constant monitoring, said it was “looking into” the reports.
Then Al Jazeera’s digital investigations unit pulled the historical satellite imagery — from Google Earth, naturally — covering the site from 2013 to the present.
What the imagery shows is that the school had been physically separated from the adjacent Sayyid al-Shuhada military base for more than ten years. Walls were built. Guard towers were removed. The compound was split into clearly distinct civilian and military sections, with a medical clinic complex sitting conspicuously between them.
The strike pattern totally collapses the “bad intelligence” story. Missiles hit the military base. Missiles hit the school. The clinic complex between them was untouched. If the targeting was precise enough to bypass the clinic — a facility that had only been open for about a year — then the intelligence was precise enough to identify a school that had been operating as a clearly civilian institution for a decade.
This is what building detection at scale looks like when it goes operational. Not the sanitized version in the Google research papers, where colored polygons overlay satellite tiles and confidence scores sort neatly into bins. The version where a model classifies structures, an analyst reviews the output, a commander approves the target list, and hundreds of children are buried under the rubble of their own school because a building that was correctly identified was incorrectly — or deliberately — categorized.
Google’s 2021 blog post describes exactly this problem in technical language.
They note that “in urban areas, the model had a tendency to split large buildings into separate instances” and that “the model also underperformed in desert terrain, where buildings were hard to distinguish against the background.” What they don’t discuss — because it falls outside the scope of a research paper filed under AI for Social Good — is what happens when the model performs well, buildings are correctly detected, and humans in the loop decide to drop bombs on a school anyway. How many times do we have to read the same pattern to believe it?
…children belonging to the same family were killed when an Israeli drone struck civilians gathering firewood near Kamal Adwan Hospital in northern Gaza.
The Other AI in the Room
The Iran strikes also surfaced something else. According to The Wall Street Journal, CBS News, and Axios, the U.S. military used Anthropic’s Claude AI model during the strikes — for intelligence assessment, target identification, and simulating battle scenarios. Claude was deployed through Palantir on classified networks. This happened hours after Trump ordered all federal agencies to stop using Anthropic’s technology, denouncing it as a “Radical Left AI company” because Anthropic refused to remove guardrails preventing mass domestic surveillance and fully autonomous weapons.
The military kept using it anyway because Claude is, according to reporting, virtually the only AI operational on classified U.S. military systems. Defense officials say replacing it would take at least six months. The tool is “embedded” in the operational workflow — the same tool that processes satellite imagery, signals intelligence, and intercepts to generate threat evaluations and targeting recommendations.
The entire AI safety debate — the one where companies publish responsible use policies and ethicists argue about alignment — evaporated the moment bombs started falling. Anthropic said no autonomous weapons. The Pentagon used the tool to automate target selection. Anthropic said no mass surveillance. The military used it to process surveillance data. The guardrails existed in press releases. The kill chain violated the narrative faster than Israel ignored ceasefire terms.
Not Good
Google publishes research on building detection under “AI for Social Good.” The datasets are CC-BY licensed and freely available. Academics cite them. Humanitarian organizations use them. The research is peer-reviewed and the methodology is transparent. It has utility for people trying to do good.
What has also been true the whole time: the same research develops capabilities that feed directly into military targeting infrastructure. The same company that publishes the research holds a contract that provides those capabilities to a military currently conducting operations. The same models that detect buildings for census purposes detect buildings for bomb damage assessment. The company’s own internal documents acknowledge the dual-use risk, and the company signed the contract anyway because it was worth $1.2 billion.
This is competent complicity by a publicly traded company, with full knowledge of the consequences, building targeting-grade capabilities under humanitarian branding while contractually binding itself to provide those capabilities to militaries it doesn’t want to monitor, under terms it doesn’t want to revoke, for purposes it doesn’t want to control.
The 2021 blog post is still up. It still says “AI for Social Good.” The buildings it mapped are still being counted, and the methodology it pioneered is still being refined. On February 28, 2026 the building count didn’t turn out so good.