1980s Robots Painting Each Other in the Dark Predicted the AI Liability Balloon

Every major automation wave in industrial history has wanted to book the wage savings on the front of its ledger. It’s obvious why. Savings! It has also wanted to bury the integration, validation, and maintenance costs on the back of the ledger, to be discovered only in hindsight. The reasons for that aren’t as obvious. Cost. Risk. Accountability. In any case, I always see the wage line modeled while the back of the ledger compounds like a ticking bomb. By the time the whole book tells the actual truth, the front-end gambler hopes the plant is gone, the workers are dispersed, and they themselves are long since retired on forward-looking bonuses, on to the next “viral” gamble.

Trump phone, who this?

Look at the GM Van Nuys robot revolution for the canonical modern example. In 1981 Roger Smith inherited a company that held roughly half of the U.S. auto market, a lot of responsibility, and had just reported only its second annual loss in seven decades. So he whipped up a $45 billion anti-labor program (called “reindustrialization,” in the same euphemistic register as “urban renewal”) built around replacing humans with robots in what Smith called a “lights-out factory.” The phrase surfaced again in 2018, when Musk used it verbatim to describe the Model 3 production line in Fremont, on the site of the same NUMMI plant we are about to discuss. The disaster repeated for the same reasons it had failed the first time.

The scale of the bet ran to roughly $45 billion in aggregate, across acquisitions (EDS in 1984, Hughes Aircraft in 1985), retooling, and ongoing automation procurement, all defended as strategy. Hamtramck opened in 1984 as the flagship, with 2,000 programmable devices and 260 robots. GM’s robot fleet rocketed from 302 units in 1980 to 14,000 by the decade’s end. By 1986 it had already gone all wrong.

Spray-painting robots, as if in a colorful dancing rebellion, started painting each other instead of the cars. Computer-guided dollies could not stay on course. Robogate welding machines smashed car bodies in ways no human ever could. The line stopped constantly, and GM ended up trucking unfinished cars across town to a fifty-seven-year-old Cadillac plant so humans could paint them by hand, cooking the ledger and painting over it all so nobody would know.

Meanwhile, up the coast from Van Nuys sat NUMMI, the GM-Toyota joint venture in Fremont. The contrast was stark because NUMMI invested in human labor. Toyota had refused the radical front-end gamble. It instead simplified job classifications, grouped workers into teams, and gave them the authority to stop the line whenever they detected problems. NUMMI not only matched the productivity of GM’s automation; it also avoided the cascading failures.

For every thought leader today pulling their hair out in AI conversations: it has always been about the harness and the environment, not the shiny new model. Toyota’s lesson was that changes in management practice were more cost-effective than the inflated claims for new machines, and that the corporate culture GM had built, treating workers as a cost line rather than as the integration layer, was the actual constraint.

I admit this can’t stop being a story about AI today. Tesla isn’t believed anymore by the people running the numbers, but a whole new generation of kids is entering the workforce who need to hear it all over again.

Van Nuys was pushed hard toward the most modern, efficient, and profitable theory of robotics available. It closed in August 1992, after just a decade of tragedy. The plant had productive workers, and it died anyway, because the corporation had loaded itself with so much integration debt that profitable individual plants had to be sacrificed to cover for the sinking robot-dream strategy.

Three steps explain the robot-fever failure, and they always seem to be the same.

First, the new automation produces output faster than the organization can absorb it. The dashboards register some disconnected gain. Nvidia says more tokens mean more… tokens. Anthropic’s Mythos campaign has marketed agent autonomy by the count of successful exploits, as if the number of things an agent can do without supervision were itself the measure of its value to the people who will own the failures.

Second, the cost of integrating, validating, and correcting that output, turning noise into signal, grows in proportion to the volume of output, not the size of the (wage) savings. At Hamtramck the cost showed up as trucks shuttling unfinished cars to a half-century-old plant to hide the ballooning low-quality output. That invoice landed on a desk somewhere, and it simply was not a line item the CFO was reporting.

Third, the brittleness compounds rapidly. Every failure at the plant was an unmistakable line stop. Every line stop, now lacking human oversight at the micro layer, cascaded at the macro layer. The senior people who could once carry the slack became the people dropped in to diagnose the exploding failures their automation produced. They couldn’t possibly keep up, let alone recover what all the dismissed workers had known.

Lisanne Bainbridge predicted all of this in 1983, a year before Hamtramck went online. Her paper “Ironies of Automation” warned that the more sophisticated the automation, the more demanding the human role that remains. The Hamtramck robots spray-painting each other in the dark were her paper made literal. You could have bet on her.

Everyone predicting that AI will cause catastrophic job loss is reading exactly the wrong end of this historical arc. People replicating GM management’s gamble will use AI to dismiss the very humans needed to make AI work. Microsoft Research confirmed the principle for generative AI in 2024, a year before Anthropic turned itself into a bazooka aimed at workers.

The economics, then, are not that automation has risk. Everything has risk. It is that conventional accounting for automation systematically books a fictional saving against a real liability. The savings appear in quarter one. The liability appears in quarter eight, and in every quarter after that, in perpetuity.
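A toy ledger makes the shape of that trap concrete. Every number here is an illustrative assumption of mine, not a figure from any GM filing or AI vendor: a flat wage saving booked from quarter one, and a liability that first appears in quarter eight and then compounds.

```python
# Toy model of the automation ledger. All figures are illustrative
# assumptions: a flat $100k/quarter wage saving booked from quarter 1,
# and a maintenance/integration liability that first lands in quarter 8
# and compounds at 10% per quarter thereafter, "in perpetuity."

SAVINGS_PER_QUARTER = 100_000   # booked on the front of the ledger
INITIAL_LIABILITY = 60_000      # first invoice, quarter 8
LIABILITY_GROWTH = 1.10         # compounding growth per quarter

def cumulative_net(quarters: int) -> float:
    """Cumulative net position of the whole book after `quarters` quarters."""
    net = 0.0
    for q in range(1, quarters + 1):
        net += SAVINGS_PER_QUARTER
        if q >= 8:
            net -= INITIAL_LIABILITY * LIABILITY_GROWTH ** (q - 8)
    return net

if __name__ == "__main__":
    for q in (4, 8, 16, 24, 28):
        print(f"Q{q}: net {cumulative_net(q):+,.0f}")
```

With these assumed numbers the book looks great for almost six years and then flips underwater around quarter twenty-four. Change the parameters and the crossover moves; the shape does not.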

GM paid the bill across the 1980s and 1990s. Its U.S. market share fell from roughly 46% in 1980 to roughly 35% by 1992, and it kept bleeding for two more decades. The Van Nuys closure in August 1992 was the visible collapse of dominance, not proof of robotic miracles. The current industry seems to be writing the same checks, once again as if the back of the ledger does not exist or will be read too late for accountability.

James Shore models it directly: a coding agent that doubles output but also doubles per-line maintenance cost quadruples maintenance load. Even when the AI produces code “just as easy to maintain” as human code, doubling output still doubles maintenance. The productivity gain is erased after nineteen months and goes net negative by month forty. And when you remove the AI, the productivity benefit goes away but the elevated maintenance liability does not. The code stays and the defect bills keep coming.
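Shore’s arithmetic can be sketched as a tiny capacity model. The parameters below are my own illustrative assumptions (hours to write a line, monthly upkeep per line), not Shore’s numbers: the agent halves the cost of writing a line and doubles the cost of maintaining one, and the crossover month falls out of the simulation rather than being hard-coded.

```python
# Illustrative capacity model of Shore's argument, with assumed
# parameters: a developer has 160 hours/month; every existing line
# consumes maintenance time each month; whatever budget remains goes
# into writing new lines.

HOURS_PER_MONTH = 160.0
WRITE_HOURS_PER_LINE = 0.5      # assumed human cost to write one line
MAINT_HOURS_PER_LINE = 0.02     # assumed monthly upkeep per existing line

def simulate(months: int, write_cost: float, maint_cost: float) -> list[float]:
    """Cumulative lines shipped by the end of each month, hour budget fixed."""
    total, shipped = 0.0, []
    for _ in range(months):
        free = max(0.0, HOURS_PER_MONTH - maint_cost * total)
        total += free / write_cost
        shipped.append(total)
    return shipped

# Baseline: humans alone.
base = simulate(48, WRITE_HOURS_PER_LINE, MAINT_HOURS_PER_LINE)
# With the agent: doubled output (half the write cost), doubled
# per-line maintenance.
ai = simulate(48, WRITE_HOURS_PER_LINE / 2, MAINT_HOURS_PER_LINE * 2)

if __name__ == "__main__":
    crossover = next(m for m, (b, a) in enumerate(zip(base, ai), 1) if b > a)
    print(f"The agent leads early, then falls behind at month {crossover}")
```

Under these toy parameters the baseline overtakes the agent in the mid-teens of months, the same territory as Shore’s nineteen; different assumptions move the month, never the shape, because doubled upkeep halves the codebase size the fixed hour budget can sustain.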

Faros AI looked at more than 10,000 developers and found AI users merging 98% more pull requests, while GitClear’s analysis of 211 million changed lines shows duplicated code blocks rising eightfold and AI-generated code averaging 1.7x more bugs per PR than human-written code, with logic defects up 75% and performance issues 8x more frequent. The senior engineers expected to absorb that validation load process conscious analytical thought at roughly ten bits per second, with a working memory of about four chunks. Defect detection drops from 87% on small PRs to 28% on PRs over a thousand lines. Faros’s overall finding: despite the 98% surge in PRs, there was no measurable organizational impact on throughput or quality.

Worse than the quality decline, the Upwork Research Institute found that workers reporting the highest AI productivity gains had an 88% burnout rate and were twice as likely to quit. The people the token-quantity-fetish dashboards celebrate as most productive are the ones closest to walking out. The tokens are, in fact, radioactive: toxic to the workers.

Related: On Robots Killing People, as published in The Atlantic.
