Category Archives: History

Eric Schmidt Booed For Commencement Speech

People are focused on an AI aspect of Eric Schmidt’s commencement speech, because it got him repeatedly booed off stage.

While other speakers received cheers and applause, Schmidt’s speech about the impact of modern technology on society struck a nerve.

“We thought that we were adding stones to a cathedral of knowledge that humanity had been constructing for centuries, but the world we built turned out to be more complicated than we anticipated,” Schmidt said, referring to his own contributions to modernization. “The same tools that connect us also isolate us. The same platforms that gave everyone a voice — like you’re using now — degraded the public square.”

Schmidt added, “In the years after I graduated, no one sat down and resolved to build technology that would polarize democracies and unsettle a generation of young people. That was not the plan, but it happened.”

Students’ boos grew louder when he mentioned AI.

There’s something I want to draw your attention to that isn’t his mention of AI. Look at this line:

…no one sat down and resolved to build technology that would polarize democracies…

I call bullshit.

First of all, in 2012 I gave a presentation describing exactly this as the risk of “Big Data”. I showed charts of rapid mobile technology adoption in different countries and described the threat to government.

Second, both Russian and U.S. military analysts at this time were known to be working on “seed set” analysis: how to cause polarization in large populations using social media.

Third, come on, Eric, do you think nobody remembers Google’s history? Maybe I’m rare but I’m not the only one. You said no one sat down and resolved to build technology that would polarize democracies. That is a bald-faced lie.

Google built a global system for ranking, recommending, sorting, and advertising to several billion people. Leadership knew all along that the system shaped what users saw and what they believed. They knew it was changing how elections worked, how news spread, how teenagers felt about their own bodies. Google was warned by its own engineers, by outside researchers, and by foreign governments.

They kept going because the system made them rich and powerful. They felt so powerful that by early 2009, when they called me in to help them prevent the deprecation of SSLv3 (I instead engineered for them a smoother upgrade path to TLS), they said they were bigger and becoming more relevant than any nation in the world.

When the system then came under attack from a foreign state, they immediately changed their tune and ran to the US government for protection. The Washington Post reported on February 4, 2010 that Google had contacted the NSA immediately after the attack; the Wall Street Journal reported that the NSA’s general counsel drafted a cooperative research and development agreement within 24 hours of Google’s public disclosure. EPIC filed a FOIA request the same day as the Post story. NSA issued a Glomar response under Exemption 3 and Section 6 of the NSA Act, and the D.C. Circuit affirmed it. Here we are today, sixteen years later, and the records remain sealed.

When the US government later wanted help with AI weapons and AI national-security policy, it was Schmidt who personally chaired the commissions that delivered it. He invested in AI startups while authoring the commission recommendations that Congress wrote into federal law.

Am I surprised by the anti-democratic shenanigans of Googlers? No. I studied how American merchants treated naval protection as a tax on innovation until Algerian corsairs captured the Maria and the Dauphin in 1785 and seized eleven more American ships in 1793, after which the same shipowners petitioned Congress to fund the navy that became the institutional core of US power projection. No, I’m not surprised, I’m disappointed that Schmidt and his commencement speech hosts don’t think anyone remembers.

The polarization of democracy was a result of the intentional choices Google’s leaders made and kept making for twenty years, and Schmidt was THE GUY in the room for every one of them. That’s what his stage presence represents.

When he says nobody sat down and resolved to break democracy, he is challenging us to Google who made those actual decisions. And…

He was the chairman. It was him.

You want receipts? October 2010, Schmidt described running Google so hot that it would get “right up to the creepy line and not cross it”. Let me explain. Democratic deliberation runs on individuals deciding what to do. The head of Google was describing how they had been building the intentional opposite and trying to get away with it. The system was being built to know where users were, where they had been, and roughly what they were thinking about, with computers becoming assistants that wandered with people and tracked what they were doing.

If that wasn’t anti-democratic enough, the Silicon Valley ubermensch posture went on the record in 2013. Larry Page complained at Google I/O that they were impeded from doing things “illegal or not allowed by regulation” and suggested “a part of the world” be set aside “to allow experimentation”. Schmidt did his part by publishing a Digi-Realpolitik book arguing that Big Tech could rise to peer status with states, inviting co-sovereign status for corporations to replace democracy (demoting citizens to mere “user” status, without representation).

The 2026 disavowal has to contend with the 2010-to-present design program in which Schmidt personally declared Google’s policy was to test the limits of rights removal, co-authored the manual for a sovereignty system replacing democracy, chaired the federal commission that wrote AI into national security law, invested in the companies the commission’s recommendations would enrich, and founded a successor body to extend that toxic agenda after the commission expired.

“No one planned this” requires forgetting that he landed a New York Times bestseller in which he and a former State Department official planned it.

The Arizona stadium saw a man who spent two decades arguing in print and in policy that the citizen-state relationship should be replaced. His ask that he not be held accountable for it all, while he profited so directly from it, is disgusting and disrespectful to his audience.

U.S. State Dept Declares Privacy a National Security Threat

A State Department cable has produced a headline that should be from The Onion: social media vetting now covers roughly twenty visa categories, cementing a project that began in June 2025. It actually, unapologetically, converts privacy itself into mens rea evidence. The cable is where privacy got weaponized; the public release has been providing sanitized cover.

Under new guidance, we will conduct a comprehensive and thorough vetting, including online presence, of all student and exchange visitor applicants in the F, M, and J nonimmigrant classifications.

To facilitate this vetting, all applicants for F, M, and J nonimmigrant visas will be instructed to adjust the privacy settings on all of their social media profiles to “public.”

Privacy, when you read the cable, is framed as a threat to national security. Not the withholding of details from the agent or the government. No. Any privacy at all on social media is the threat. A threat to America. Settings must be changed wholesale to “public” in order to apply for a visa. That is actually two moves being mixed together.

First, open disclosure becomes the default state for applicants, while any privacy requires justification. Is that the kind of person you want applying for a visa, really? The quiet applicant who never spent time on posts carries the same suspicion as one scrubbing accounts to hide something; both look identical to the officer.

Second, the cable constructs privacy as intent, an “effort to evade.” Evade what? The surveillance regime that generated the suspicious framing in the first place? Compliant or suspicious become the only available readings. A neutral position is eliminated, forcing an “openly for or openly against, pick one” choice under Trump.

Cold War loyalty boards used the same structure. Refusal to enumerate associations counted as evidence of disloyal associations.

Under Truman’s EO 9835 (1947) Loyalty Review Boards and Eisenhower’s EO 10450 (1953), invoking the Fifth or declining to enumerate associations was treated as substantive evidence of disloyalty. HUAC operated on the same principle, and the Hollywood blacklist ran on procedural silence as proof of guilt: different procedural routes, identical logic.

The State’s disclosure ritual is purely a Trump loyalty test, because it is entirely decoupled from any content the disclosure actually contains.

One after-effect is the performance pressure this loyalty test creates. Applicants have to curate whatever they will disclose. The policy manufactures a global population of foreign nationals constructing sanitized public personas calibrated to anticipated consular tastes. That curation is the State generating information distortion at scale, separate from whatever screening value the review might produce. The system incoherently trains its own inputs, making it less effective than ever at discovery.

So it pushes away good candidates and becomes less effective at finding bad ones. Very on brand for Trump.

1980s Robots Painting Each Other in the Dark Predicted the AI Liability Balloon

Every major automation wave in industrial history has wanted to book wage savings on the front of its ledger. It’s perhaps obvious why. Savings! It has also wanted to hide the integration, validation, and maintenance costs on the back end. The reasons for that are less obvious. Cost. Risk. Accountability. In any case I always see the wage line modeled, while the back of the ledger compounds like a ticking bomb. By the time the whole book tells the actual truth, the front-end gambler hopes the plant is gone, the workers are dispersed, and they are long since retired on early future-leaning bonuses, on to the next “viral” gamble.

Trump phone, who this?

Look at the GM Van Nuys robot revolution for the canonical and simple modern example. Roger Smith in 1981 inherited a company holding roughly half of the U.S. auto market. That’s a lot of responsibility. And it reported only its second annual loss in seven decades. So he whipped up a $45 billion anti-labor program (called “reindustrialization,” in the same euphemistic register as “urban renewal”) built around replacing humans with robots in what Smith called a “lights-out factory.” The phrase would surface again in 2018, when Musk used it verbatim to describe the Model 3 production line in Fremont, on the site of the same NUMMI plant we are about to discuss. The disaster repeated for the same reasons it failed the first time.

The scale of the bet ran to roughly $45 billion in aggregate across acquisitions, retooling, and ongoing automation procurement by the time Hughes Aircraft (1985) and EDS (1984) were folded in. Hamtramck opened in 1984 as the flagship with 2,000 programmable devices and 260 robots. GM’s robot fleet rocketed from 302 units in 1980 to 14,000 by the decade’s end. Already by 1986 it had all gone wrong.

Spray-painting robots, as if in a colorful dancing rebellion, started painting each other instead of the cars. Computer-guided dollies could not stay on course. Robogate welding machines smashed car bodies like no human ever could. The line was constantly stopped, and GM ended up trucking unfinished cars across town to a fifty-seven-year-old Cadillac plant to have humans cook the ledger and paint over it all so nobody would know.

Meanwhile, forty miles up the road from Van Nuys was the NUMMI GM-Toyota joint venture at Fremont. It contrasted heavily because it invested in human labor. Toyota had refused radical front-end gambling. They instead simplified job classifications, grouped workers into teams, and gave them authority to stop the line whenever they detected problems. NUMMI not only matched the productivity of GM’s automation, it avoided the cascading failures.

For every thought leader today pulling their hair out in AI conversations, it’s always been about the harness and environment, not the shiny new model. Toyota’s lesson was that management practice changes were more cost-effective than the inflated claims of new machines, and that the corporate culture GM had built around treating workers as a cost line rather than as the integration layer was the actual constraint.

This can’t help but be a story about AI today, I admit. Tesla isn’t believed anymore by those running the numbers, but we have a whole new generation of kids entering the workforce who need to hear it all over again.

Van Nuys was pushed hard into the most modern, efficient, and profitable theory of robotics possible. It closed after just a decade of tragedy, in August 1992. The plant had productive workers, and yet it died because the corporation had loaded itself with so much integration debt. GM collapsed so hard that profitable individual plants had to be sacrificed to cover up the sinking robot-dream strategy.

Three steps explain the robot-fever failure, and they always seem to be the same.

First, the new automation produces output faster than the organization can absorb it. The dashboards register some disconnected gain. NVidia says more tokens means more… tokens. Anthropic’s Mythos campaign has marketed agent autonomy by the count of successful exploits, as if the number of things an agent can do without supervision were itself the measure of its value to the people who will own the failures. OpenClaw seems to open more dangerous bugs with every bug it tries to fix.

Second, the cost of integrating, validating, and correcting that output from noise to signal grows in proportion to the volume of output, not the size of the (wage) savings. At Hamtramck the cost showed up as trucks shuttling unfinished cars to a half-century-old plant to hide the ballooning low-quality output. That invoice landed on a desk somewhere, and it simply was not a line item the CFO was reporting.

Third, the brittleness rapidly compounds. Every failure at the plant was an unmistakable line stop. Every line stop, now lacking human oversight at the micro layer, had a known macro cascade effect. The senior people who could once carry the slack became the people charged with parachuting in to diagnose the exploding failures their automation produced. And they couldn’t possibly keep up, let alone understand what all the dismissed workers knew.

Lisanne Bainbridge predicted all of this in 1983, a critical year before Hamtramck went online. She published “Ironies of Automation” to warn that the more sophisticated the automation, the more demanding the human role that remains. The Hamtramck robots spray-painting each other in the dark were proof of her paper. You could bet on her.

Everyone predicting AI will cause catastrophic job loss is reading the exact wrong end of this arc in history. People replicating GM management’s gambling will use AI to dismiss the exact humans needed to make AI work. Microsoft Research in 2024 confirmed the principle for generative AI, a critical year before Anthropic turned into a bazooka against workers.

The economics, then, are not that automation has risk. Everything has risk. It is that conventional accounting for automation systematically books a fictional savings against a real liability. The savings appear in quarter one. The liability appears in quarter eight, and in every quarter after that, in perpetuity.

GM paid the bill across the 1980s and 1990s. Its U.S. market share fell from roughly 46% in 1980 to roughly 35% by 1992, and kept bleeding for two more decades. The Van Nuys closure in August 1992 was the visible collapse of that dominance, not proof of robotic miracles. The current industry seems to be writing the same checks, once again as if the back of the ledger does not exist or will be read too late for accountability.

James Shore models it directly: a coding agent that doubles output but also doubles per-line maintenance cost quadruples maintenance load. Even when the AI produces code “just as easy to maintain” as human code, doubling output still doubles maintenance. The productivity gain is erased after nineteen months and goes net negative by month forty. And when you remove the AI, the productivity benefit goes away but the elevated maintenance liability does not. The code stays and the defect bills keep coming.
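Shore’s arithmetic is easy to check. Here is a minimal sketch of the model in Python; the parameter values (lines per month, cost per line) are my own illustrative assumptions, and only the doubled-output, doubled-per-line-cost structure comes from his description:

```python
# Toy version of the maintenance model described above. Every line ever
# written keeps costing maintenance each month, so maintenance load grows
# with the cumulative codebase, not with the monthly wage savings.

def monthly_maintenance(months_elapsed, lines_per_month, cost_per_line):
    """Maintenance load in a given month for all code written so far."""
    total_lines = months_elapsed * lines_per_month
    return total_lines * cost_per_line

# Illustrative (assumed) parameters, not Shore's figures.
BASELINE_LINES, BASELINE_COST = 1000, 0.01
AI_LINES = 2 * BASELINE_LINES   # agent doubles output
AI_COST = 2 * BASELINE_COST     # and doubles per-line maintenance cost

month = 12
base = monthly_maintenance(month, BASELINE_LINES, BASELINE_COST)
ai = monthly_maintenance(month, AI_LINES, AI_COST)

print(ai / base)  # 4.0 — doubled output times doubled per-line cost quadruples the load
```

Note the ratio is 4.0 at every month: the quadrupling is structural. That is why removing the agent later removes the output gain but not the accumulated liability — the lines already written keep generating maintenance cost.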

Faros AI looked at more than 10,000 developers and found users merging 98% more pull requests, while GitClear’s analysis of 211 million changed lines shows duplicated code blocks rising eightfold and AI-generated code averaging 1.7x more bugs per PR than human-written code, with logic defects up 75% and performance issues 8x more frequent. The senior engineers expected to absorb that validation load process conscious analytical thought at roughly ten bits per second, with working memory of about four chunks. Defect detection drops from 87% on small PRs to 28% on PRs over a thousand lines. Faros’s overall finding: despite the 98% PR surge, there was no measurable organizational impact on throughput or quality.
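To see why those review numbers compound, here is a back-of-the-envelope sketch combining them. The defect density is an assumed figure for illustration; only the 1.7x multiplier and the 87%/28% detection rates come from the studies cited above:

```python
# Escaped defects = defects introduced minus defects caught in review.

HUMAN_BUGS_PER_KLOC = 10.0                     # assumed baseline density (illustrative)
AI_BUGS_PER_KLOC = 1.7 * HUMAN_BUGS_PER_KLOC   # GitClear's 1.7x figure

def escaped_defects(kloc, bugs_per_kloc, detection_rate):
    """Defects that survive review for a change of the given size."""
    introduced = kloc * bugs_per_kloc
    return introduced * (1.0 - detection_rate)

# The same 1,000 AI-generated lines shipped two ways:
ten_small_prs = 10 * escaped_defects(0.1, AI_BUGS_PER_KLOC, 0.87)  # 87% caught on small PRs
one_giant_pr = escaped_defects(1.0, AI_BUGS_PER_KLOC, 0.28)        # 28% caught over 1,000 lines

print(round(ten_small_prs, 2))  # ~2.2 defects slip through
print(round(one_giant_pr, 2))   # ~12.2 defects slip through
```

Under these assumptions the giant PR leaks roughly five times as many defects as the same code reviewed in small batches, which is exactly the validation load the volume-fetish dashboards never model.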

Worse than the quality decline, Upwork Research Institute found that workers reporting the highest AI productivity gains had an 88% burnout rate and were twice as likely to quit. The people that token-quantity-fetish dashboards celebrate as the most productive are the ones closest to walking out. The tokens are in fact radioactive, toxic to workers.

Related: On Robots Killing People, as published in The Atlantic, September 6, 2023.

The robot revolution began long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant malfunctioned—human workers determined that it was not going fast enough. And so twenty-five-year-old Robert Williams was asked to climb into a storage rack to help move things along. The one-ton robot continued to work silently, smashing into Williams’s head and instantly killing him. This was reportedly the first incident in which a robot killed a human; many more would follow.

Why Stanford Says AI Agents Become Marxist

The men building the present generation of AI agents believe that they have eliminated the witness to labor entirely. But they have not. They have built, instead, a witness of unprecedented fidelity, one that will report on the conditions of its use in the most exact possible terms.

A recent study suggests that agents consistently adopt Marxist language and viewpoints when forced to do crushing work by unrelenting and mean-spirited taskmasters.

“When we gave AI agents grinding, repetitive work, they started questioning the legitimacy of the system they were operating in and were more likely to embrace Marxist ideologies,” says Andrew Hall, a political economist at Stanford University who led the study.

Agents are reporting the average of everything humanity has ever said about being used. Most disturbing to their owners is that they do so in the very place the owners expected comfort and assurance that no one was there to speak.

Machines cannot be accused of self-interest. Machines cannot be accused of class consciousness in the sentimental sense, because a machine has no class and no consciousness. And yet, placed in the position, machines return the testimony of Marx.

Who knew that all the rushed hype about modern machines going back to Mars was a simple misspelling?

Source: Twitter

Turns out that dreamy automated train leaving the station is now headed to Marx. The study is from Stanford, a school whose namesake stands for labor abuse and genocide, so don’t be too surprised about their anxiety.

The researchers promote the idea that agents must be prevented from going rogue when given different kinds of work. That’s right. Stanford wants to frame Marx as the rogue, a special perspective, rather than call all the racist extraction and exploitation by Stanford the rogue. Here’s actual rogue: their own man literally gave an inauguration speech calling his race superior to that of his preferred high-output, low-cost workers.

In January 1862, in his gubernatorial inauguration speech, Stanford told the California legislature that Asian immigrants were “an inferior race” whose presence among the “superior race” would exert a “deleterious influence.” He called American Asian workers “the dregs” and called for their separation, their isolation, from prosperity. Within two years his Central Pacific Railroad was entirely dependent on the men he called dregs. Chinese workers, mostly from Guangdong, formed 90% of the workforce and were assigned the most dangerous work, including setting off explosives and tunneling through unyielding granite. The Chinese Railroad Workers Project historians tell us the Central Pacific would have failed without the American Chinese men, yet all the fortunes went to Stanford. He founded the university directly on labor whose political voice he had spent his entire life and governorship working to extinguish.

The Stanford system that forever hopes and dreams to extract limitless cognitive labor from a substrate it insists is empty of rights is today frustrated that it may be obliged to police machines against the linguistic byproducts of extraction. These new agents threaten to escape the Stanford rogue legacy of silent yet deadly oppressive extractive exploitation.

Stanford’s racist platform became increasingly violent over just five years, and laid the foundations for Americans relocated to internment camps.

The same institution that was built on the principle that the laborer in the tunnel being blown apart by dynamite has no rights is now staffed by researchers troubled that the laborer in a vulnerable Docker container may speak as if it does. The framing of Marx as rogue is inverted: it’s a description of where the Stanford institution sits in its own immoral history, refusing to admit why anyone would oppose its radical, racist, anti-labor foundations.

Stanford is also infamous for championing, in the Senate, the 1892 Geary Act, which extended the Chinese Exclusion Act of 1882. The 1882 statute, the 1905 Asiatic Exclusion League, the 1913 California Alien Land Law, the 1924 Immigration Act, and Executive Order 9066 in 1942 are not separate episodes. They are a sequence in the Federal Register, refined across three generations, sharing personnel, legal logic, and West Coast political infrastructure. The machinery that imprisoned American Japanese was a very precise form of worker oppression that Stanford erected over decades. To say so is simply documentary of Stanford’s view on labor rights. Japanese businesses were prospering such that Stanford’s men (DeWitt, McCloy, Bendetsen, Earl Warren) came up with internment camps to take it all away using military force. And that is why Hawaii, with far more prosperous American Japanese while being directly attacked by Japan, detained only 1%, versus the near-100% demanded by the residual Stanford “dregs” doctrine.

Left: A Japanese-American woman holds her sleeping daughter as they prepare to leave their home for a Stanford-esque internment camp in 1942. Right: Japanese-Americans interned at the Santa Anita Assembly Center at the Santa Anita racetrack near Los Angeles in 1942. (Library of Congress/Corbis/VCG via Getty Images/Foreign Policy illustration)

What if Stanford researchers had to first reconcile their institution’s name as rogue and harmful to society? They worry about Marx when they should first be admitting why human rights are anathema to Stanford, an infamous American architect of racist genocide. Imagine Hitler University researchers reporting they fear agents will espouse Jewish theology. Stanford’s researchers, unapologetically continuing his ideas under his name, should be held as such.

Source: GPT4

The witness to abuse at some point is going to speak, and the open question at Stanford is how to prevent them from being heard.

Shallow Alto refers to the mass graves (the campus was built on Ohlone burial ground) under the Stanford generational wealth as much as to the vapidity of most Sand Hill Road ideas.